New Publishing Standards in Geoinformatics FCE CTU

Editorial

Dear readers,

At the Central Library of the Czech Technical University in Prague, we were delighted to be approached by the editor-in-chief of Geoinformatics FCE CTU, Prof. Aleš Čepek, with a request to use our library services in the area of scientific journal publishing. The Geoinformatics FCE CTU journal (GI) presents significant research in its discipline, and numerous articles published in GI have been cited by journals covered by the citation databases Web of Science and Scopus. The cooperation with the Central Library brought GI not only the implementation of new publishing standards but also a new online presentation.

The first significant change was the transfer of the journal onto the Open Journal Systems (OJS) publishing platform. This journal management system is open source software designed for managing and publishing electronic journals; it automates repetitive editorial activities and provides a user-friendly website. OJS was developed, and is further supported and freely provided, within the Public Knowledge Project under the GNU General Public License.

For readers, OJS brings a clear and fast presentation of individual articles and issues, with a preview of the full texts before downloading and an effective way to search the archives. For authors, OJS provides a simple and transparent environment in which they can submit new manuscripts, monitor the progress of the review process, upload revised versions, and upload finalized proofs. Reviewers have at their disposal a questionnaire, space for comments to the authors, and concluding comments to the editor. The editors take advantage of the automation of the editorial process (pre-set e-mails, automatic deadline notifications, reviewer assignment). Together, these tools enable a highly transparent and effective review process.

Open Journal Systems further allows published articles to be assigned a DOI (Digital Object Identifier), an international and unique persistent identifier of an electronic document which provides a permanent link to it. Since CTU in Prague is a member of the CrossRef registration agency via the Central Library, all contributions in the GI journal are now assigned DOIs. Both OJS and DOI are further connected with ORCID, a unique author identifier: authors can specify their ORCID when registering in OJS, and ORCID is used to link publications unambiguously to their authors. Both DOI and ORCID can be used to advantage in databases such as Scopus or Web of Science and in social networks such as ResearchGate.

GI is currently indexed in the DOAJ database. Evaluation for Engineering Village, an Elsevier platform, is in progress; this platform helps companies keep pace with scientific and technological developments and opens up opportunities for the future.

Geoinformatics FCE CTU now offers its authors all the publishing standards used in international journals, and presents high-quality articles to its community of readers in an attractive and user-friendly environment. We wish the readers and editors continuous quality improvement and much success in the future.
Iva Adlerová (http://orcid.org/0000-0002-6287-9212)
Lenka Němečková (http://orcid.org/0000-0003-2297-2532)
CTU in Prague – Central Library

My Experience from CVUT

Sigbjørn Herstad
Department of Geomatics (http://www.geomatikk.ntnu.no/english/index.html), Faculty of Engineering Science & Technology (http://www.ivt.ntnu.no/e_index.php), Norwegian University of Technology and Science (http://www.ntnu.no/)
e-mail: sigbjohe@stud.ntnu.no

Introduction

In this paper I will try to describe both my background and my study period in Prague. If you have further questions or want to know more, don't hesitate to contact me.

Master of Science in Engineering and ICT (Information & Communication Technology)

Background

This branch was created because there are many good engineers in all the engineering branches (not only geomatics), and also many people who can handle information technology. However, a good engineer does not always handle information technology equally well, and vice versa. There was therefore a need for people skilled in both engineering and information technology. This happened in 2002, the same year I started studying at the university.

Structure of the field of study

Master of Science in Engineering and ICT (Information & Communication Technology) is a five-year master programme. The first two years include basic engineering and computer courses (mathematics, object-oriented programming, physics, mechanics, databases, fluid mechanics, statistics, algorithms and computer structures, etc.). After the first two years, a choice has to be made about which direction to go, and there is a limited number of places in each branch. These are the study branches to choose between:

1. Energi- og prosessteknikk (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=40b4970b08f821.74711920) [Norwegian], Energy & Process Engineering (http://www.ept.ntnu.no/en/) [English]
2. Geofag og petroleumsteknologi (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=3fe15ace7dbcd1.58219460) [Norwegian], Petroleum Engineering (http://www.ipt.ntnu.no/) [English]
3. Geomatikk (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=405868776327e8.28972588) [Norwegian], Geomatics (http://www.geomatikk.ntnu.no/english/index.html) [English]
4. Konstruksjonsteknikk (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=40b49aa561daa2.96785114) [Norwegian], Structural Engineering (http://www.bygg.ntnu.no/ktek/) [English]
5. Marin teknikk (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=40b498480b55c1.16452366) [Norwegian], Marine Civil Engineering (http://www.ivt.ntnu.no/bat/english/mb_english/) [English]
6. Produktutvikling og materialer (http://www.studier.ntnu.no/rw_index_sprog.php?dokid=421259891e9108.53831450) [Norwegian], Product Development & Materials (http://www.immtek.ntnu.no/engelsk/) [English]

During the 4th or 5th year you have to decide on a further specialization. For instance, in geomatics you can choose between:

- Cartography and Geographic Information Sciences (http://www.geomatikk.ntnu.no/english/intro/cartography.html)
- Geodesy (http://www.geomatikk.ntnu.no/english/intro/geodesy.html)
- Photogrammetry and Remote Sensing (http://www.geomatikk.ntnu.no/english/intro/photogrammetry.html)

For a short introduction to the Master of Science in Engineering and the different fields it is possible to study, I recommend visiting Hybrida (http://www.hybrida.ntnu.no/hybridaweb/skole/), a union for the students of this programme, and looking at the PDF file there (http://www.hybrida.ntnu.no/filer/iikt_ivtstyret050204.pdf). For the moment it is only in Norwegian, but it contains a lot of pictures and is therefore easy to understand.
Experience from studying at NTNU (Norwegian University of Technology and Science) in Trondheim

The study branch was founded the year I started at the university. Since it was the first year, everything was new. Most of the courses and the programme for the first three years were planned, but there have since been changes in the courses and in the year in which they are mandatory. I considered applying for the information technology study branch, but I was afraid it was focused only on computers and too technical; I wanted something in between. For me it was very positive to be able to delay the decision about which direction to go for two years: in this programme everyone takes the same courses during the first two years, which gives two more years to decide which field you really want to study.

Experience from studying at CVUT (Czech Technical University) in Prague

I was recommended to go to Germany or Finland, because these countries have well-known universities in geomatics. However, I decided to take a different road. I wanted to study somewhere exotic (not in the Australian beach-and-surfing way) and different from Norway. I found out that at CVUT it would be possible to study my 4th year. I knew it might be difficult to find the subjects I needed, but I hoped it would be possible to take more computer courses and other courses. That was also one of my intentions in going abroad – finding new courses which I would normally not have discovered.

Negative experiences

- Registration for courses. It took some time to find the people, the departments and the right places to queue for different papers.
- Not integrating with Czech students. Since most of the English courses are made for Erasmus and foreign students, there are few or no Czech people attending them. Integration would become easier if there were a better mix of Czech and foreign students in classes.
- Too many courses.
In Norway it is obligatory to have 30 ECTS credits each semester. Most courses in Norway carry 7.5 ECTS, so four courses per semester is common in the first years, whereas at CVUT I needed ten courses to reach the same number of ECTS credits.
- Courses are full or cancelled. Many of the courses that were listed as open when applying for a year at CVUT were closed. The most popular courses filled up very early and were therefore difficult to register for.
- Computer rooms and places to study. There seem to be too few computers and places to study. However, this does not seem to be a problem for the foreign students, because most of them have their own computer at the dormitory.

Positive experiences

- Selecting courses from different faculties. The possibility to select courses from all the different faculties is an advantage.
- The International Office is helpful. Without the International Office I think it would be even more chaotic to find out where to go and how to register, so a big thank-you to them for helping new students find their place.
- The International Student Club (http://www.isc.cvut.cz/en/index.php?menu=1) is important for the social life of the students. It is active not only during the first weeks; trips are arranged throughout the whole year.
- Helpful teachers. Most of the teachers are very helpful and want to help as much as possible. Even though many of the teachers are very busy, most of them take the time and show interest in helping foreign students.
- More interesting courses. Since it is possible to combine courses from different faculties, there is a wide variety of courses to select between.
- Different locations. The faculties are spread all around Prague, from Dejvicka and Karlovo namesti to Florenc. Since I had courses at three faculties, I had to travel between these locations. At first I thought it would be a problem, but it made it easier to explore the city.
- Meeting other international students. Perhaps one of the main reasons why I took a year abroad; it is easy to get in touch with international students.
- New contacts and relations for further studies or work abroad.
- Travelling. It is easy to travel around the Czech Republic and the nearby countries; it is possible to take a week off and travel.

Suggestions and ideas for further consideration

I will try to make some suggestions on how to improve the conditions so that more people come to study geomatics at CVUT. It was mostly luck that made me discover CVUT and the fact that it was possible to study here. Increased cooperation with universities abroad would be good, both in exchanging students for one or two semesters and perhaps also in courses; for instance, two or three people from different universities could cooperate through the semester (over the internet) and present a final project. More accurate information on the homepage would be good: there is a list of subjects, but will all of them be open? Would it be possible to have more project-oriented courses? It is difficult (but possible) for foreign students to attend Czech lessons and courses; I think it is easier for Czech people to take part in English lessons than the other way around. For those Czech students going abroad to study for a period of time, it is also an advantage to have already had some courses in English. So perhaps some of the Czech courses could be taught in English?
Preface for Vol. 8 – FOSS4G Prague

This special issue about FOSS4G-CEE & Geoinformatics 2012, held for the first time in May 2012, offers selected reviewed papers of the conference. Geoinformatics FCE CTU, started in 2006 at the Department of Mapping and Cartography, Faculty of Civil Engineering, Czech Technical University in Prague, covered the academic section of FOSS4G-CEE.

The acronym FOSS4G was first introduced in 2004 as an acronym for Free and Open Source Software for Geoinformatics by a Japanese research group in a publication, and was then used for the GRASS GIS users conference held in 2004 in Bangkok, Thailand. Later the acronym was transferred to the Open Source Geospatial Foundation (osgeo.org) for their annual conference. FOSS4G-CEE 2012 was the first regional FOSS4G conference in Central and Eastern Europe. More than sixty presentations, six workshops and five tutorials were accepted for the conference. The number of registered participants was 120 from twenty countries, namely the Czech Republic (35), Romania (14), Germany (12), France (6), Austria (5), Slovakia (4), and Estonia, Hungary, Switzerland, Poland, Turkey, USA, Italy, United Kingdom, Croatia, Rwanda, New Zealand, Georgia, Ghana and Nigeria (ranging from 3 to 1 participants).

Plenary speakers

Arnulf Christl (OSGeo) has worked as a systems architect in the spatial domain since the late nineties. His core competencies are distributed spatial data infrastructures, metadata, agile development, and free and open source software methodology, deployment and business models. He is president of the Open Source Geospatial Foundation (OSGeo), which he co-founded in 2006, and a member of the Open Geospatial Consortium (OGC) Architecture Board coordinating international standard development. He contributes to the European INSPIRE process through his company Metaspatial, providing consultancy for international SDI projects. In his latest project he worked as technical coordinator of the eContent+ funded project ESDIN and contributed to the European Location Framework Architecture (1.5 MB PDF). ESDIN is a consortium of 20 European national mapping and cadastre agencies, private industry and academia, headed by EuroGeographics.

Athina Trakas is OGC's Director for European Services. In this position she is the contact person for OGC in Europe, responsible for OGC's activities and networking in Europe. This includes connecting with European stakeholder organisations, the European Commission and members, supporting regional and national forum activities, and planning and managing OGC outreach and recruitment. She has a diploma in geography and started working in the field of GIS in 1998. Until 2006 she was responsible for marketing and key account management at CCGIS, a GIS consultancy company. In 2006 she joined OGC as Director for Business Development on a part-time basis; from 2007 to 2008 she was also responsible for business development, marketing and international business at WhereGroup for free and open source GIS and standards. Since 2008 she has been a charter member of the Open Source Geospatial Foundation (OSGeo). In 2009 she was appointed OGC's Director for European Services.

Helena Mitasova (OSGeo) is an associate professor at the Department of Marine, Earth and Atmospheric Sciences and a member of the geospatial science and technology faculty at North Carolina State University (NCSU) in Raleigh, NC, USA.
She has been a member of the open source GRASS GIS development team since 1991 and co-authored the first book on GRASS, now in its third edition. She is a member of the editorial board of Transactions in GIS and a charter member of the Open Source Geospatial Foundation. Her research at the University of Illinois, the US Army Construction Engineering Research Laboratories in Champaign and currently at NCSU has focused on methods for terrain analysis, geospatial modeling and visualization. Her PhD is from the Slovak Technical University, Bratislava, Slovakia.

Markus Neteler (OSGeo) is head of the GIS and Remote Sensing unit at the Research and Innovation Centre of the Edmund Mach Foundation, Trento, Italy. His main research interests are remote sensing for environmental risk assessment, epidemiological GIS modelling and free software GIS development, the latter since 1993. He is principal GIS analyst in several European and national projects related to vector-borne diseases and biodiversity. He is author or co-author of two books on the open source geographical information system GRASS and of various papers on GIS applications. He is a founding member of the Open Source Geospatial Foundation and served on its board of directors from 2006 to 2011.

Vasile Craciunescu (geo-spatial.org) is a researcher at the Romanian National Meteorological Administration, working in the Remote Sensing & GIS laboratory since 2001. He received his diploma in cartography and physical geography in 2001. He is currently in charge of the scientific and operational activities at Meteo Romania related to rapid mapping, air quality data integration, spatial data infrastructure and web mapping. Vasile is a FOSS4G and open data promoter and uses his free time to further develop geo-spatial.org, a collaborative effort by and for the Romanian community to facilitate the sharing of geospatial knowledge and the discovery and publishing of free geographic datasets and maps.

Jiri Polacek (Czech Office for Surveying, Mapping and Cadastre) has worked as head of the Cadastral Central Database section of the Czech Office for Surveying, Mapping and Cadastre since 2000. In this position he is responsible for the operational running of the Information System of the Cadastre of Real Estates (ISCRE), customer services and data management linked with the ISCRE database. The section is closely involved in the implementation of the INSPIRE directive for the related data themes (such as "cadastral parcels"). From 1975 to 2000 he worked in several research positions concerned with IT projects; his main research interests are computer graphics and image processing linked with GIS. His PhD is from the Czech Technical University, Prague. He maintains long-term cooperation with the university as an external teacher and a member of a scientific branch committee. He is a member of the presidium of the Czech Association for Geoinformation (CAGI).

This special issue addresses a series of different topics. In the first paper, Jiří Poláček and Petr Souček report on the implementation of the INSPIRE directive and web services for cadastre-related themes over the past years in the Czech cadastre. Related to this, the paper by Anna Kratochvílová and Václav Petráš introduces a new Quantum GIS plugin for Czech cadastral data which supports the new Czech cadastral exchange data format. Two papers focus on geospatial software: the paper by Peter Löwe introduces a new module for GRASS GIS which enables the user to perform rule-based analysis and workflow management.
The article by Adéla Volfová and Martin Šmejkal gives a short introduction to R, the free software environment for statistical computing and graphics, with a focus on geospatial data analysis functionality and methods. The paper by Ikechukwu Maduako proposes a heterogeneous sensor database framework with geo-processing spatial analysis functionality integrated in the Sensor Observation Service (SOS). In their paper, Miloš Tichý et al. present the current system for the discovery of dangerous near-Earth small bodies which have a chance of colliding with the Earth. The article by R. Nétek illustrates how e-learning systems along with open source solutions are used at Palacký University in Olomouc, Czech Republic.

Markus Neteler and Aleš Čepek
Editorial Board, October 2012

Interoperability in GIS According to OGC Specifications

Radek Sklenička
Department of Mapping and Cartography, Faculty of Civil Engineering, CTU in Prague
e-mail: radek.sklenicka@fsv.cvut.cz

Keywords: Open Geospatial Consortium, geographic information systems, Web Processing Service, chaining web services

Abstract

The trend in the field of geographic information systems is a transition from desktop products to distributed GIS systems based largely on the potential of web services. In this context one speaks of so-called interoperability. The key body supporting interoperability in GIS is the international non-profit consortium Open Geospatial Consortium, Inc. (OGC). The OGC consortium develops specifications of application environments and protocols that enable integration across applications, spatial data and the services that process them. One of the problems currently being addressed is the development of a GIS model based on the possibility of chaining web GIS services.

Introduction

Development in the field of geographic information systems is moving from desktop products towards distributed GIS systems based largely on the potential of web services. In this context one speaks of so-called interoperability in GIS. Interoperability goes beyond the ability to integrate heterogeneous data in different data formats; it also concerns integration at the level of software applications, web services and other services. Interoperability in GIS is maintained by developing standards and specifications and by using them. This involves the standardization of data formats and structures, but also standards and specifications for the definition of computational procedures and algorithms, and specifications of application interfaces, protocols and, of course, web services. One of the most important organizations dealing with standardization in geographic information technologies is the consortium Open Geospatial Consortium, Inc. (OGC). The OGC consortium offers specifications for GIS which are published and freely accessible on the OGC home pages [1]. This openness is not unlike the idea of openness behind open source and free software products. Nowadays hardly anyone can imagine the world of geographic information systems without OGC specifications, and the same holds for open source and free software products. This article focuses on the OGC consortium, reviews the specifications commonly used in practice, and also mentions those that are yet to appear in everyday use.

Open Geospatial Consortium, Inc.

Several consortia have a fundamental influence on specifications and standardization in GIS.
For instance, the W3C (World Wide Web Consortium) [3] does not deal directly with standardization in geoinformatics, yet it is of great importance for the field, because it fundamentally shapes the development of interoperability in web technologies in general. Among the most important bodies that deal directly with standardization in geoinformatics are ISO (International Organization for Standardization), INSPIRE (the Infrastructure for Spatial Information in Europe) and, perhaps with the most fundamental influence, OGC (Open Geospatial Consortium), see [1].

OGC is an international industrial non-profit consortium of more than 300 companies, universities and government organizations which work together towards interoperability in the field of geographic information systems and so-called location-based services. OGC was founded in 1994. It develops specifications of application interfaces and protocols that enable interoperability across applications, spatial data and geoprocessing services, as described in [2] in the note on the OGC Specification Program.

How OGC specifications come into being

The creation of an OGC specification follows a clearly defined procedure given by the consortium's directive. Before a problem under consideration becomes an OGC specification, it passes through a wide range of development phases, discussions and practical testing. A document with the official status of an OGC specification is preceded by documents labelled, for example, Discussion Papers or Recommendation Papers.

OGC specifications commonly used in practice

As a reminder, let us list several commonly used OGC specifications; these are mainly specifications of web map services, together with specifications of data formats, style definitions and definitions of the basic graphical objects that occur in GIS. All specifications are available on the OGC home pages, see [1].

WMS

Probably the most commonly used OGC specification today is the ubiquitous WMS (Web Map Service), a web service providing maps in raster form. To avoid a misunderstanding: a server offering a WMS does not hold only raster data, but also vector data, often stored in a DBMS. Upon a client's request for map content, the service selects the necessary spatial data, generates a raster image from them and sends it back.

WFS

The WFS (Web Feature Service), by contrast, also provides vector spatial data in the GML (Geography Markup Language) data format, which is another OGC specification. Unlike WMS, it therefore allows spatial data to be edited on the client side.

SLD

SLD (Styled Layer Descriptor), as the name suggests, defines the possibilities for choosing the styles of the provided data layers, which the user defines as needed. SLD extends the capabilities of WMS.

SFS

SFS (Simple Features Specification) determines how the basic graphical objects occurring in GIS are defined (points, lines, polygons, surfaces, ...) and the basic spatial relations between them (intersection, overlap, touch, ...). There are three implementation specifications, for the OLE/COM and CORBA interfaces and for the SQL query language.
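For illustration, requests to the map and feature services just described are simple HTTP queries of the following kind (the server address and layer/feature type names are placeholders, in the style of the example requests later in this article):

http://server.foo/foo?service=WMS&version=1.1.1&request=GetMap&layers=roads&styles=&srs=EPSG:4326&bbox=14.2,50.0,14.5,50.2&width=800&height=600&format=image/png
http://server.foo/foo?service=WFS&version=1.0.0&request=GetFeature&typename=roads&maxfeatures=10

The first query returns a rendered raster image of the requested extent; the second returns the underlying vector features as GML.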
Specifications of interest from the geodetic point of view

From the geodetic point of view, the interesting specifications are those dealing with coordinate reference systems and coordinate transformations.

Spatial Referencing by Coordinates

This OGC specification also stands in for the proposed international ISO standard 19111 Geographic information – Spatial referencing by coordinates. It defines coordinate reference systems and the operations between them. As known from geodesy, it defines, for example, the reference ellipsoid, the geodetic datum, the geoid, geocentric coordinates, ellipsoidal heights, and so on.

Coordinate Transformation Services

An OGC implementation specification defining an application interface for working with coordinate systems and with transformations between coordinate systems. Implementations exist for Java classes and for the CORBA and COM interfaces. This specification essentially shows programmers how to develop software for operations with coordinate systems. An existing complete implementation of this specification is contained in GeoTools, a Java class library for developing GIS applications, see http://www.geotools.org/.

Web Coordinate Transformation Service (WCTS)

This is a draft implementation specification (so far a so-called Discussion Paper) for a web service providing transformations between coordinate systems. In line with the basic architecture of OGC web services (OWS), see below, the service offers a basic description of its capabilities, such as the supported coordinate systems and the supported operations between selected coordinate systems, and it allows the chosen transformation to be performed — all through the requests GetCapabilities, IsTransformable and Transform. One implementation of WCTS is being developed, for example, by the well-known open source developer Frank Warmerdam, see [4]. Another implementation of WCTS can be found at http://geobrain.laits.gmu.edu/cgi-bin/wcts/wcts.

OpenGIS Web Services (OWS) architecture

The sections above listed some OGC specifications of web services that are already in common use or in the testing and development phase. Others could certainly be added; for example, the WCS (Web Coverage Service) is also already commonly used. For illustration, let us also mention WTS (Web Terrain Service), Web3D (Web 3D Service) and WRS (Web Registry Server). The number of web service specifications keeps growing, and it is desirable that they share a common defining framework. For this reason there is the OpenGIS Web Services Common Specification (OWS), which defines this general framework for web service specifications. It specifies several aspects common to web service implementations: a framework of given parameters and content of client requests (e.g. GetCapabilities) and of the data structures returned by the service (requests and responses). On top of this common framework, the implementation of a particular service defines its own parameters and data structures. As an illustration, consider the request that returns the overall description of a given service: the well-known GetCapabilities request, found in implementations of WMS, WFS, WCS, WPS, WCTS and other services.

Web Processing Service (WPS)

This proposed specification (so far a so-called Discussion Paper) of a web service provides access to a wider range of GIS operations through a web interface. It extends the possibilities from merely providing and presenting spatial data towards processing them and performing various computational tasks. The service is aimed at processing raster and vector spatial data. WPS does not specify a particular task or the particular input and output data required; instead it provides a general mechanism for describing a wide range of different computational tasks, and a general mechanism for describing the input and output data required by the client of the service. In accordance with the OWS architecture, the client interacts with the service through the following three operations.
GetCapabilities

This operation returns a description of the service (as an XML document) and a list of the available computational processes and their versions. The request looks as follows:

http://server.foo/foo?service=WPS&request=GetCapabilities&version=0.2.1

DescribeProcess

In response to this request the server returns a detailed description of one or more of the available processes, together with a description of their input and output data. Again, the full request for illustration:

http://server.foo/foo?service=WPS&request=DescribeProcess&version=0.2.1&ProcessName=xxx

Execute

Execute launches the requested process (computational task) and returns the requested output data.
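As a rough sketch of how a client can drive these operations programmatically, the following Python fragment uses the OWSLib library; note that OWSLib targets later WPS versions than the 0.2.1 draft discussed here, and the endpoint URL is only a placeholder:

from owslib.wps import WebProcessingService

# Connect to a (hypothetical) WPS endpoint and read its capabilities document.
wps = WebProcessingService('http://server.foo/foo')
wps.getcapabilities()
for process in wps.processes:
    print(process.identifier, process.title)

# Ask for the detailed description of the first advertised process,
# including its required inputs.
description = wps.describeprocess(wps.processes[0].identifier)
for data_input in description.dataInputs:
    print(data_input.identifier)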
Chaining web services

A further step forward is the possibility of chaining web services. The aim is to base web services on common specifications and standards and thereby allow them to be connected at the server-to-server level, cascaded and combined in various ways. This level of connection places great emphasis on a precise description of the individual services. Languages and interfaces for describing web services come into play, such as WSDL (Web Services Description Language) [5] and UDDI (Universal Description, Discovery and Integration) [6]; one speaks of ontologies and of the semantic web in general. In the case of OGC, an example is the proposed specification, at the Discussion Paper stage, OWS 2 Common Architecture: WSDL SOAP UDDI.

Conclusion

The transition from desktop products to distributed GIS systems using web services goes hand in hand with the rapid development of web technologies. It is essential to maintain interoperability, built on defining specifications and standards and on using them. The main body dealing with specifications in the field of GIS is the Open Geospatial Consortium, Inc. (OGC). The basic architecture of the OGC web service specifications is formed by the OpenGIS Web Services Common Specification (OWS), which creates a common general framework for web services. Specifications of the Web Processing Service (WPS) are being developed, which will further extend the possibilities of using GIS functionality in the web environment. The technology of chaining web GIS services is entering the testing phase. A model of distributed GIS, based on various combinations of web services, will allow users to build their own GIS solutions flexibly. This model rests on the newest and rapidly developing web technologies, and only time will tell where the development leads in particular cases.

References

1. Open Geospatial Consortium, Inc. home page, http://www.opengeospatial.org/ [2006-05-10]
2. Technical Committee Policies and Procedures, document OGC 05-020r4, Open Geospatial Consortium, Inc., 2005, http://www.opengeospatial.org/about/?page=tcpp [2006-05-10]
3. W3C World Wide Web Consortium home page, http://www.w3.org/ [2006-05-10]
4. OGR WCTS implementation home page, http://home.gdal.org/projects/wcts/ [2006-05-10]
5. E. Christensen, F. Curbera, G. Meredith, and S. Weerawarana: "Web Services Description Language (WSDL) 1.1", W3C Note, 2001, http://www.w3.org/TR/wsdl [2006-05-10]
6. UDDI project home page, http://www.uddi.org [2006-05-10]

Motivation for Deploying Free Software GIS in Geoinformatics Teaching

Martin Landa
Department of Mapping and Cartography, Faculty of Civil Engineering, CTU in Prague
e-mail: martin.landa@fsv.cvut.cz

Abstract

The aim of this contribution is to present the use of free software in teaching in the Geodesy and Cartography study branch at CTU in Prague, and above all the motivation to continue this trend in the teaching of geoinformatics, in connection with the new Geoinformatics branch at CTU. The text also outlines the basic aspects of GIS teaching with an emphasis on freely distributable tools and geospatial data. Finally, practical experience with deploying free software in the exercises of the course Digital Image Processing is described.

Use of GNU tools in the Geodesy and Cartography study branch so far

GNU applications (and open source programs in general) have a fairly rich history in the teaching process (see the study plan [6]) of the G+K (Geodesy and Cartography) branch. In this respect the contribution of Prof. Aleš Čepek cannot be overlooked; without his commitment nothing of the kind would have happened at all. In the course Informatics 1, students become acquainted with the GNU/Linux operating system (OS); it is rather interesting that above-average students visibly stand out precisely here. This course underpins a whole range of further courses, above all the compulsory courses Informatics 2 and 3, where the basics of programming in C++ are taught. The exercises take place under GNU/Linux, and students routinely work with the text editor GNU Emacs (http://www.gnu.org/software/emacs), the compiler GNU g++ (http://gcc.gnu.org) and, in exceptional cases, also with the debugger GNU gdb (http://www.gnu.org/software/gdb) or with DDD (http://www.gnu.org/software/ddd). The GNU Octave package (http://www.gnu.org/software/octave) is further used for computationally demanding tasks in the course Higher Geodesy. Besides the GNU/Linux OS, the set of programming tools (text editor, compiler, debugger) and GNU Octave, primarily intended for numerical computation, GRASS GIS (http://grass.itc.it) is used in the course Digital Image Processing [8]. At present it is probably the only free GIS software used in teaching in the G+K branch.

Free software as one of the pillars of the Geoinformatics study plan

In the winter semester 2006/2007 a new bachelor study branch, Geoinformatics, opens at the Faculty of Civil Engineering, CTU; a year later the follow-up master study branch will be launched. The motivation for opening this branch at CTU is discussed in more detail in [1]. A closer look at the recommended study plan [7] reveals a fairly clear continuous line of courses emphasizing free software and its use in practice. The Geoinformatics study branch stands on firmly defined pillars: theoretical geodesy, the cadastre of real estate and, above all, informatics itself.
Geoinformatics is, above all, geospatially oriented informatics, so the highest demands should be placed on the teaching of informatics, and the greatest possible emphasis was placed on this when the study plan was drawn up. Restricting ourselves to the informatics courses related to free software: in the first semester students master the basics of working with the GNU/Linux OS, and in the course Algorithms and Foundations of Numerical Mathematics the Python programming language (http://www.python.org) will most probably be used. In the second semester students become more closely acquainted with the architecture and design of database systems, with an emphasis on relational DBMS; in the exercises PostgreSQL (http://www.postgresql.org) will primarily be used. Students thus obtain the necessary foundation for the follow-up courses (GIS of the second and third generation, geodatabases, programming for DBMS, web map services, etc.).

One of the basic skills of a graduate of a technical branch should be the ability to apply simple programming techniques (e.g. scripting). Unfortunately this often does not hold, and in the case of a Geoinformatics student a lack of programming skills is absolutely fundamental, almost disqualifying. That is why such great emphasis is placed on programming in the study plans. The basic programming toolkit of a Geoinformatics student will consist of the languages C++, Java and Python (i.e. hybrid object-oriented programming languages). A student of the master branch can even enrol in a course oriented purely towards free software in geoinformatics, the elective Free Software GIS.

The role of free software in GIS teaching

Alongside the widely used proprietary systems, free software / open source software plays an important role in the adoption of GIS technology. It provides access to the technology for users who, for various reasons, cannot afford to use proprietary systems. Moreover, diversity in approaches to software development is essential for continued innovation in geoinformation technologies. The free software development model brings a very important aspect: the need for communication, both within the community itself and beyond it in a wider context. Evidence of this is the newly established foundation for the support of open source GIS (http://www.osgeo.org).

Besides free software, the importance of freely available geospatial data must not be overlooked. Whereas in the USA a whole range of geodata is provided completely free of charge, in Europe this is not the case; on the contrary, there are rather significant restrictions in this area. This restrictive approach indisputably hinders further development and significantly impairs the availability of information. In Europe there is unfortunately no tradition of freely sharing the results of various projects, and not only in the GIS field. The availability of geodata should be a subject of public discussion, particularly with regard to sources of freely available data. Next to the possibility of using software freely stands the need for a free data base. The gradual extension of the initial dataset in the GIS exercises with further (online, public domain) data sources can be regarded as a positive contribution of the study process. Another motivating aspect can be the involvement of students in software development (at various levels); this opens the way to working on various projects and to the possibility for students themselves to present their results, not only in the Czech Republic but also internationally.
Besides the general foundations of GIS, students should become acquainted with both proprietary and open software systems. Free software should be supported and spread precisely in the academic environment.

GRASS GIS as a tool for digital image processing

GRASS GIS has been used successfully since the academic year 2003/2004 in the exercises of the course Digital Image Processing [8]. Several reasons spoke for its deployment, in particular licensing problems with the proprietary software used until then. At the beginning of this academic year, GeoWikiCZ (http://gama.fsv.cvut.cz) was newly launched as a tool for presenting the G+K study programme (which currently covers two study branches: Geodesy and Cartography and, newly, Geoinformatics). The motivation for, and experience with, using a wiki as a tool for collaborative management of web pages in an academic environment are discussed in more detail in [3]. When the instructions for the Digital Image Processing exercises were being written during the last winter semester, there was no doubt where to put these texts: on GeoWikiCZ. The thematic focus was essentially taken over from previous semesters:

- introduction to the architecture of GRASS GIS, basic terminology, data visualization
- basic image enhancement methods
  - histogram stretching
  - colour composites, RGB and IHS colour models
  - map algebra
  - image filtering
- data import/export, georeferencing of image data
- Fourier transform
- supervised and unsupervised classification of image data

As the data basis, the dataset from the previous academic year was largely reused, supplemented with several data layers from the FreeGeodataCZ dataset (http://grass.fsv.cvut.cz/wiki/index.php/geodata_cz). One of the tasks the students solved was the import of georeferenced and non-referenced image data. The basic satellite scene covering the area of interest (north-west Bohemia), Landsat5-TM from 1994, was thus supplemented with scenes from Landsat7-ETM+ (2004) and Landsat1-MSS (1975), see Fig. 1.

Fig. 1: RGB colour composite 243: Landsat MSS (1975), TM (1994), ETM+ (2004)
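For illustration, a composite like the one in Figure 1 and a simple map algebra operation can be produced in GRASS with commands along the following lines (the raster map names are hypothetical, not those of the course dataset):

# band combination 2-4-3 written into a new raster map
r.composite red=tm_b2 green=tm_b4 blue=tm_b3 output=composite_243

# map algebra: NDVI from the red (band 3) and near-infrared (band 4) channels
r.mapcalc "ndvi = float(tm_b4 - tm_b3) / float(tm_b4 + tm_b3)"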
Preparing a teaching text of this extent brings, besides positive effects, also slightly negative ones. The students certainly appreciated the possibility of preparing for a given exercise in advance. On the other hand, some students showed a noticeable tendency simply to copy a command from the web page into the command console, run it and not care about anything more. Such individuals were, however, a minority, and sooner or later even they ran into a problem they were forced to solve. It was gratifying, and to a certain extent certainly motivating, to watch the enthusiasm and professional growth of the students. Their first steps in GRASS, and in the GNU/Linux OS (the vast majority of them had not taken Informatics 1 in its current form, so in many cases they encountered GNU/Linux for the first time in their lives), were certainly not simple or easy. By the last exercises almost all of them were able to work in the GRASS environment without noticeable problems, and some students even expressed interest in installing GRASS on their own personal computers. Another fact is worth mentioning: during the teaching, a number of software-oriented problems and shortcomings were encountered, both by the students and by the teacher. The author of this article is slowly but surely resolving these shortcomings. This can in a way be considered a benefit: when free software is used in teaching, essentially nothing prevents us from removing the shortcomings or bugs we find and thus contributing to the quality of the whole software project.

Plans for the future

In the preparation for the winter semester 2006/2007 a fairly substantial extension of the teaching text is planned, both with respect to GRASS GIS and to other software tools from the free software family — probably the Open Source Software Image Map (OSSIM, http://www.ossim.org) and the statistical computing package R (http://www.r-project.org).

Conclusion

Free software certainly has a firm place in the teaching process at universities. In many cases the possibility of studying the source code (i.e. detailed knowledge of exactly how a given task is solved) is very necessary, almost irreplaceable. The teaching of geoinformatics should in general be oriented towards standards and their promotion. Projects carried out in academia should ideally be aimed at freely distributable software and its further improvement. In no case can the use of state subsidies in connection with closed, strongly commercial systems be excused.

References

1. Leoš Mervart and Aleš Čepek: Geoinformatics study at the Czech Technical University in Prague. In: From Pharaohs to Geoinformatics (FIG Working Week 2005 and GSDI-8). Fédération Internationale des Géomètres (International Federation of Surveyors), April 16-21, Cairo, Egypt, 2005. http://geoinformatika.fsv.cvut.cz/2005/ap-2005-mervart-cepek/ap-2005-mervart-cepek.pdf
2. M. Landa: GRASS jako pomůcka při výuce GIS a DPZ. In: Konference GIS Ostrava 2005, 23.-26. ledna 2005. http://gamam.fsv.cvut.cz/cgi-bin/viewcvs.cgi/*checkout*/publications/2005/gis_ostrava_05/ref_grass_go05.pdf?root=cvs_landa
3. J. Pytel and M. Landa: Možnosti systému wiki při správě informačních zdrojů. In: BELCOM 06, 6.-7. února 2006. http://gamam.fsv.cvut.cz/cgi-bin/viewcvs.cgi/*checkout*/publications/2006/belcom_06/pytel-landa_wiki.pdf?rev=1.1&root=cvs_landa
4. J. Nieminen: Teaching GIS the GNU way. In: Open Source Free Software GIS – GRASS Users Conference 2002, Trento, Italy, 11-13 September 2002.
5. Helena Mitasova, Markus Neteler: Freedom in geoinformation science and software development: a GRASS GIS contribution. In: Open Source Free Software GIS – GRASS Users Conference 2002, Trento, Italy, 11-13 September 2002. http://www.ing.unitn.it/~grass/conferences/grass2002/proceedings/proceedings/pdfs/mitasova_helena_3.pdf
6. Doporučený studijní plán oboru Geodézie a kartografie. http://gama.fsv.cvut.cz/wiki/index.php/doporu%c4%8den%c3%bd_studijn%c3%ad_pl%c3%a1n_oboru_geod%c3%a9zie_a_kartografie
7. Doporučený studijní plán oboru Geoinformatika. http://gama.fsv.cvut.cz/wiki/index.php/doporu%c4%8den%c3%bd_studijn%c3%ad_pl%c3%a1n_oboru_geoinformatika
8. Návody na cvičení k předmětu Zpracování obrazových dat. http://gama.fsv.cvut.cz/wiki/index.php/zpracov%c3%a1n%c3%ad_obrazov%c3%bdch_dat

Historical Maps in a Map Server Environment

Jiří Cajthaml
Department of Mapping and Cartography, Faculty of Civil Engineering, CTU in Prague
e-mail: jiri.cajthaml@fsv.cvut.cz

Abstract

This contribution deals with the possibilities of displaying historical maps on the internet. These maps often lie in archives, where they are hard to access for ordinary users. With the development of information technologies, the maps are now being converted into digital form and archived on digital media. Current technology, however, goes even further: the data can be published on the internet, where they are very easily available to everyone, experts and laypeople alike. Besides being viewable on the internet, with the advent of web services the data can now also be distributed between different applications; web map services are the future of web cartography. My demonstration project concerns the data of the Second Military Survey of Austria-Hungary (1819-1858). I have published these data on the internet by means of a map server. The data are freely accessible, not only for viewing, but are also distributed through a WMS service.

Source of digital data

Scanning the maps

Analogue maps must be converted into digital form by scanning. Scanning is based on the principle of sampling image elements, usually in three colour components (the RGB colour system). Two very important parameters govern scanning: the scanning resolution and the colour depth. In the Laboratory of Digital Cartography at the Faculty of Civil Engineering, CTU in Prague, I carried out a number of test scans with various resolutions and colour depths; we have a Contex Chameleon TX36 large-format drum scanner at our disposal. Scanning resolution is usually given in dpi (dots per inch). For scanning maps, a value somewhere between 300 dpi and 500 dpi appears most suitable: at higher resolutions the image no longer improves significantly, while below 300 dpi the text on the maps often degrades. The most suitable resolution, however, always depends on the particular map (font size, legibility, ...).

Colour depth influences the quality of the digital image very significantly. For scanning colour maps, two methods are most common: 8-bit scanning into a colour palette, or 24-bit scanning of the RGB components ("true colour"). Less often, 16-bit scanning is used, with 6 bits for the green component and 5 bits each for the blue and red components. Scanning into an 8-bit palette means that the resulting image is composed of only 256 colours (2^8). This scanning usually proceeds in two phases: the scanner first examines the map and chooses the 256 palette colours, and each sampled pixel is then assigned the nearest value from the palette. The advantage of this approach is the saving in data size (one third of the "true colour" size). 24-bit scanning is the higher-quality option, in which each image element (pixel) can be assigned one of approximately 16.7 million colours (24 bits); scanning at this colour depth is particularly suitable where JPEG compression will subsequently be applied, since JPEG works with "true colour" data.
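For a rough idea of the data volumes involved (the sheet size is only an illustrative assumption): a map sheet of about 50 cm x 50 cm scanned at 400 dpi gives roughly (50 / 2.54) x 400 ≈ 7 900 pixels per side, i.e. about 62 million pixels. Stored uncompressed, this is about 62 MB at 8 bits per pixel and about 186 MB at 24 bits per pixel, which is why the choice of colour depth and compression matters.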
As for the maps of the Second Military Survey that I used, they were scanned directly in the Vienna state archives for the Ministry of the Environment of the Czech Republic (http://www.env.cz). In the Czech Republic these data are handled by the Laboratory of Geoinformatics of UJEP in Most (http://www.geolab.cz), with which we cooperate at the Department of Mapping and Cartography. The maps were scanned in Vienna at 400 dpi with an 8-bit palette, so I could not influence the scanning parameters myself; I would rather have proposed scanning in "true colour" with subsequent compression. The 8-bit scanning was evidently chosen to save space on the data media. The 400 dpi resolution is, in my view, ideal.

Data compression

After scanning, the data can be compressed to reduce the overall file size. We distinguish two basic types of compression: lossless and lossy. With lossless compression the original image can be recovered exactly by the reverse procedure; with lossy compression it cannot. There is probably no point in describing the theory of compression algorithms here. For the Second Military Survey data, lossless LZW compression was used; as a trial, the data were also converted into a "true colour" image with JPEG compression. This lossy compression does degrade the image, but it saves a great deal of space and further increases the speed of the application that works with the data. The scanned data should remain undegraded in the archive, but for work within the application I recommend using compression.

Georeferencing

The data of the historical military surveys consist of the sheets of individual map sections. Each map sheet contains, besides the map field, also the map frame, the title and other marginal information. If we want to work with a seamless map of the whole Czech Republic, the rasters must be cropped along the map frames before georeferencing itself. Georeferencing means placing the raster image into a coordinate system. A raster file can carry the information about its position either directly in the data (usually in the file header) or in an external file. The first group is most often represented by the GeoTIFF format, the second by the so-called "world files". The GeoTIFF format consists of a single file (tif) which carries the information about the raster's position in its header. "World files" are small text files accompanying the raster, so the data always consist of a pair of files (tif+tfw, jpeg+jgw, ...). The coordinate placement is given by six parameters (the x and y coordinates of the centre of the upper-left pixel, the pixel sizes along the x and y axes, and the rotations of the x and y axes); these parameters in fact represent an affine transformation of the raster. The rotations are usually zero, since working with rotated rasters is much more demanding; the data are therefore resampled during georeferencing into a new raster with pixels aligned with the coordinate axes.
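For illustration, a world file is just six lines of plain text; the values below are invented and stand, in order, for the pixel size in x, the two rotation terms, the (negative) pixel size in y, and the x and y coordinates of the centre of the upper-left pixel in the target reference system:

2.5
0.0
0.0
-2.5
1043250.25
755600.75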
In my case, two sets of georeferenced data of the Second Military Survey for the territory of Bohemia were created. The first set was produced in cooperation with Ing. Brůna (Laboratory of Geoinformatics, UJEP in Most, http://www.geolab.cz) and consists of rasters (TIFF with LZW compression) at 200 dpi; these data were georeferenced in the ArcGIS 9.0 software. The second set was produced in cooperation with Ing. Doubrava (GEPRO, a.s., http://www.gepro.cz) and consists of JPEG-compressed rasters at 400 dpi; these data were georeferenced in the Kokeš software. In both cases the "world files" method was used. The georeferencing itself was carried out in both cases by an affine transformation onto the corners of the map sheets. The coordinates of the map sheet corners were transformed from the Stable Cadastre system (Gusterberg) into the S-JTSK system by Doc. Čada (University of West Bohemia in Pilsen, http://gis.zcu.cz), who derived a global transformation key. As follows from the dissertation of Ing. Doubrava [5], more accurate georeferencing makes no sense for these maps: the deviations of the transformed points (global key versus affine transformation onto the map sheet corners) reach at most only a few metres.

Map server

In a broader sense, a map server can be understood as the entire internet application used to work with spatial data; in a narrower sense, a map server is an application able to process user requests and return a particular extract of the source data. In what follows I use the term map server in the narrower sense. A whole range of ready-made map servers exists, some commercial, some free (freely distributable); in addition, any specialist can program their own map server according to their own ideas. I used an existing solution, namely UMN MapServer (http://mapserver.gis.umn.edu), which belongs to the free software category. UMN MapServer can operate in the internet environment either through the CGI interface or using the MapScript library (usable from a number of programming languages – PHP, Python, Perl). I decided to use the CGI MapServer, although I would like to try the possibilities of MapScript in the future. In the resulting application the map server generates the map images (main map window, reference map), the image of the graphic scale bar, and so on. All these images are generated after the user's request is sent (clicking an icon, ticking a box, panning the map, etc.). The map server is thus a kind of core of the application: it can work with the source data in a coordinate system and generate images into the web page. More on the theory of how map servers work can be found in my publications at the 16th Cartographic Conference in Brno [1], GIS Ostrava 2006 [2] and Juniorstav 2006 [3].
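The layers served by UMN MapServer are described in a so-called mapfile; the fragment below is only a sketch with invented paths, extent and names, not the configuration actually used in this project:

MAP
  NAME "historical_maps"
  EXTENT -910000 -1230000 -430000 -930000
  SIZE 600 400
  IMAGETYPE JPEG
  LAYER
    NAME "second_survey"
    TYPE RASTER
    STATUS ON
    # a single georeferenced sheet; a tile index would normally cover the whole territory
    DATA "/data/second_survey/sheet_o_07_iii.tif"
  END
END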
Thanks to this library, which again belongs to the free software category, the map can be controlled much more interactively (changing the map extent by dragging a rectangle, panning the map by dragging). Some controls remain the same as in the previous case. I programmed several additional functions myself, in particular the readout of cartographic coordinates while the mouse moves over the map, and searching for municipalities within the Czech Republic. The coordinate readout is handled by JavaScript functions, unfortunately separately for different web browsers (Internet Explorer behaves in a non-standard way). The municipality search is based on a combination of AJAX (listing municipalities as individual letters are typed) and PHP (serving the database request). The application uses the ÚIR-ZSJ database, which can be obtained from the website of the Czech Statistical Office; on the server the data were migrated into the freely distributable PostgreSQL database (http://www.postgresql.org). The application is shown in Figure 2.

Figure 2: CGI MapServer + AJAX

Web map services

Web map services make it possible to provide data to other applications, or to connect data from other servers. UMN MapServer supports web map services very well, so it is no problem to connect data from other servers or to publish data through map services. Since the historical maps are raster data, only the WMS service is worth using. As a demonstration, some data from the data warehouse of ÚHÚL Brandýs nad Labem are connected in my applications, namely contour lines and the sheet layout of the SMO-5 maps; any further layers can be connected without difficulty. At the same time, the data of the Second Military Survey are distributed through a WMS service. Web map services are described in more detail, for example, in my paper prepared for the GICON 2006 conference in Vienna [4].

Conclusion

I believe that this standardly described procedure for publishing historical maps on the internet will allow easy conversion of the data held in various map archives. My work has prompted the future equipping of the Laboratory of Digital Cartography with server hardware (expected in June 2006); on this server the applications will then be publicly accessible. Besides the applications themselves, the data will be provided through WMS, so anyone will be able to connect these data into their own application. In connection with my work, a new course, Interactive Cartography, will be introduced into the teaching at our faculty; its content will be precisely the work with map servers. The internet applications themselves can of course be improved. I would like to add conversion of coordinates into the widely used WGS-84. Another option is to implement identification of the map sheet layout according to the mouse position over the map window. As for the source data, the remaining parts of the Czech Republic (Moravia, Silesia) will also have to be georeferenced; joining these coordinate systems into a seamless map will be very interesting. The Third Military Survey can be processed by the same procedure as the Second; an open question remains the georeferencing of the First Military Survey, where the map sheet corners cannot be used.

References

1. Cajthaml, J.: Využití webových mapových serverů. Sborník: 16. kartografická konference – Mapa v informační společnosti, 7.–9. 9. 2005. Ed.: Václav Talhofer, Lucie Friedmannová, Alois Hoffman. Univerzita obrany, Brno, ČR, 2005. 91 pp. (abstracts), full texts on CD. ISBN 80-7231-015-1.

2. Cajthaml, J.: MapServer of the old maps.
In: Proceedings of the International Symposium GIS Ostrava, 23.–25. 1. 2006. VŠB – Technical University of Ostrava, ISSN 1213-2454.

3. Cajthaml, J.: Jak publikovat staré mapy na internetu? In: Juniorstav 2006, sborník konference, díl 8, Geodézie a kartografie, Brno, 25. 1. 2006. VUT v Brně, 2006. ISBN 80-214-3114-8.

4. Cajthaml, J.: Old maps internet presentation – overview of possibilities. In: GICON 2006 – Geoinformation Connecting Societies, conference proceedings (in print), Vienna, 10.–14. 7. 2006. University of Vienna, 2006.

5. Doubrava, P.: Zpracování rastrových mapových podkladů pro využití v oblasti aplikací GIS a katastru nemovitostí. Doktorská disertační práce, 156 pp. ČVUT v Praze, 2005.

6. Official web pages of the UMN MapServer project, http://mapserver.gis.umn.edu

7. Official web pages of the mscross project, http://datacrossing.crs4.it/en_documentation_mscross.html

8. Official web pages of the PostgreSQL project, http://www.postgresql.org

Testing of the possibilities of using IMUs with different types of movements

Pavol Kajánek
Slovak University of Technology, Faculty of Civil Engineering, Radlinského 11, 813 68 Bratislava, Slovakia, e-mail: pavol.kajanek@stuba.sk

Abstract

An inertial navigation system (INS) is a self-contained navigation technique. Its main purpose is to determine the position and the trajectory of an object's movement in space. The technique is well established not only as a supplementary method (integrated GPS/INS systems) but also as an autonomous system for the navigation of vehicles and pedestrians. The aim of this paper is to design a test for low-cost inertial measurement units. The test results give information about the accuracy, which determines their possible use in indoor navigation or other applications. Several methods for processing the data obtained by inertial measurement units, which remove noise and improve the accuracy of position and orientation, are described.

Key words: inertial measurement unit, indoor navigation.

1. Inertial measurement unit

An inertial measurement unit (IMU) is a complete three-dimensional dead-reckoning navigation system. It includes a set of inertial sensors such as accelerometers, gyroscopes and magnetometers (for absolute orientation). These sensors form one orthogonal system, which is known as the body frame. The outputs of the gyroscopes are angular rates, which are used for calculating the attitude, and the outputs of the accelerometers are accelerations, which are used for the position determination ([1]).

Figure 1: Body frame of the inertial measurement unit

A gyroscope is an inertial sensor for measuring angular rate. For strapdown IMUs, three main types of gyroscopes are used: the spinning-mass gyroscope, the optical gyroscope and the vibratory gyroscope. The spinning-mass gyroscope works on the principle of conservation of angular momentum. Optical gyroscopes work on the principle of the Sagnac effect. The vibratory gyroscope operates on the principle of detecting the Coriolis acceleration of a vibrating element while the gyroscope is rotating ([3]).

The accelerometer is based on Newton's second law of motion: the acceleration a is measured via the force F acting on a proof mass m ([3]),

a = F / m.   (1)
For strapdown systems, the pendulous accelerometer and the vibrating-beam accelerometer are used. The pendulous accelerometer has a proof mass attached to a hinge, which deflects when acceleration occurs. The vibrating-beam accelerometer uses the piezoelectric effect: a pair of quartz crystal beams is mounted symmetrically, and each beam vibrates at its own resonant frequency. When an acceleration is induced along the sensitive axis, one beam is compressed while the other is stretched, and the acceleration is measured as the difference in frequency. Magnetometer sensors work on the principle of the Hall effect ([3]); these sensors transform the magnetic field into an output voltage difference.

The accelerometers and gyroscopes of low-cost IMUs are manufactured as micro-electro-mechanical sensors (MEMS). MEMS are relatively small structures manufactured from silicon or quartz. The advantages of MEMS sensors are small size, low weight, rugged construction, low power consumption, short start-up time, high reliability, low maintenance and compatibility with operation in hostile environments ([2]).

Currently, inertial measurement units have a number of significant applications in surveying. IMUs are used as a complementary method for navigation with GNSS, where the measurements made by the inertial navigation system (INS) are used to interpolate the trajectory determined by GNSS. An integrated GNSS/INS system benefits from the INS advantages such as a high data rate and good short-term accuracy. IMUs are also often used for the navigation of vehicles or pedestrians in indoor and outdoor environments.

2. Mathematical model of data processing

The main aim of the navigation solution for a moving object (pedestrian or machine) is to determine its current position, velocity and orientation in a reference coordinate system. The following part of the paper describes the processing of the inertial measurements.

2.1 Position determination for a moving object

This process can be divided into four steps: the attitude update, the transformation of the specific force resolving axes, the velocity update and the position update. The approach to the processing of inertial measurements is shown in the flowchart in Figure 2. The first step is the numerical integration of the angular rates, the output of the gyroscopes, to obtain the Euler angles (roll φ, pitch θ and yaw ψ). In the second step the Euler angles are used to transform the acceleration a^b from the body frame to the acceleration a^n in the navigation frame; the body frame changes its orientation in space with respect to the fixed navigation frame. Then the force of gravity is removed from the transformed acceleration:

R = [ cosθ·cosψ   sinφ·sinθ·cosψ - cosφ·sinψ   cosφ·sinθ·cosψ + sinφ·sinψ
      cosθ·sinψ   sinφ·sinθ·sinψ + cosφ·cosψ   cosφ·sinθ·sinψ - sinφ·cosψ
      -sinθ       sinφ·cosθ                    cosφ·cosθ ],   (2)

a^n = R · a^b - (0, 0, g)^T,   (3)

where R is the current rotation matrix and g is the local gravity in the yaw (vertical) direction (9.80665 m/s²). The next step is the numerical integration of the acceleration a^n, which gives the velocity v, and then a second numerical integration, which gives the position r from the velocity:

r_t = r_{t-1} + ∫_{t-1}^{t} v dt,   (4)

where r_t is the current position, r_{t-1} the previous position, a^n_t the current acceleration in the navigation frame, v_t the current velocity and t the epoch of measurement. The mathematical model described above is often called the dead-reckoning algorithm ([4]).
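To make the four processing steps concrete, the following sketch implements one update of the dead-reckoning algorithm described above in Python with NumPy. It is a minimal illustration under simplifying assumptions (small integration step, direct integration of body rates into Euler angles, constant gravity of 9.80665 m/s², ZYX Euler convention); the variable names and sample readings are invented for the example and are not taken from the paper.

import numpy as np

G = 9.80665  # local gravity (m/s^2)

def rotation_matrix(roll, pitch, yaw):
    # Body-to-navigation rotation matrix composed from the Euler angles (ZYX convention)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cp*cy, sr*sp*cy - cr*sy, cr*sp*cy + sr*sy],
        [cp*sy, sr*sp*sy + cr*cy, cr*sp*sy - sr*cy],
        [-sp,   sr*cp,            cr*cp],
    ])

def dead_reckoning_step(state, gyro, acc_body, dt):
    # state = (euler angles, velocity, position) from the previous epoch
    euler, vel, pos = state
    # 1) attitude update: simple numerical integration of the angular rates (small-angle simplification)
    euler = euler + gyro * dt
    # 2) transform the specific force from the body frame to the navigation frame
    acc_nav = rotation_matrix(*euler) @ acc_body
    # ... and remove gravity, assumed here to act along the vertical axis
    acc_nav = acc_nav - np.array([0.0, 0.0, G])
    # 3) velocity update and 4) position update (double numerical integration)
    vel = vel + acc_nav * dt
    pos = pos + vel * dt
    return euler, vel, pos

# Illustrative use with made-up sensor readings at 100 Hz
state = (np.zeros(3), np.zeros(3), np.zeros(3))
gyro = np.array([0.001, -0.002, 0.0005])   # rad/s
acc = np.array([0.05, -0.02, 9.81])        # m/s^2, sensed specific force
state = dead_reckoning_step(state, gyro, acc, dt=0.01)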
2.2 Methods for improving position and orientation accuracy

The disadvantage of low-cost IMUs is the low accuracy of the position determination, which decreases with the time of navigation. The low accuracy is caused by sensor errors that are accumulated by the integration process. There are a number of methods which can help to improve the accuracy, such as:
• the Allan variance method ([6]),
• the zero-velocity-update algorithm (ZUPT) used for foot-mounted navigation ([5]) (a minimal stance-detection sketch is given at the end of Section 2),
• additional sensors: RFID, GPS, compass ([7]),
• specific information obtained from maps: coordinates of doors, direction of corridors and others ([8]).

Figure 2: Flowchart of the processing of inertial measurements

In these methods, the Kalman filter is often used during the data processing to suppress noise components in the signal ([1]).

2.3 IMU testing for different types of movement with accuracy verification

The application of the sensors is strongly influenced by their accuracy, by which the IMUs can be divided into several groups. The presented test concerns low-cost IMUs, which, due to their low production costs, have a lower accuracy compared to IMUs designed for tactical applications. The designed tests are focused on the accuracy of positioning with low-cost IMUs based on motion along a defined trajectory. Since the IMU accuracy decreases with the time of navigation, a short-term linear movement will be performed first. Further, advanced movements with variable speed and orientation of motion (movement of pedestrians in a building) and also the motion of the IMU on a testing cart (no high-amplitude vibrations caused by holding the IMU in the pedestrian's hands) can be used. Based on the proposed test, two low-cost sensors (iNEMO STEVAL-MKI062V2 and IMU EEAS) will be tested.

In the first test, a simple and short-time movement of a pedestrian will be simulated. The tested IMUs will be positioned on the pedestrian's hand. A straight movement of the testing device between points with known positions (e.g. surveying pillars) will be realised. The aim of the first test is to test the accuracy of the IMUs in the short term.

Figure 3: Tested devices: iNEMO STEVAL-MKI062V2 (left) and IMU EEAS (right)

In the second test, a long-term advanced movement will be simulated (standard pedestrian movement with random changes of acceleration and orientation). The IMUs will be placed on the pedestrian's foot and on the pedestrian's hand. The aim of the second experiment is to test the IMUs for indoor applications, where the movement of pedestrians is more difficult (random changes of acceleration and orientation) and the inaccuracy grows with the travelled distance. The trajectory of the movement will be limited by the floor plan, where the coordinates of passages (doors, marked by red circles in Figure 4) are known.

Figure 4: Fixed points as specific information in the floor plan

In the third test, the IMUs will be placed on a testing cart (Figure 5). The purpose of this test is the simulation of machine movement, where there are no high-amplitude vibrations caused by holding the IMUs in the pedestrian's hands and the movement of the cart is more or less straight.
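As referenced in the list of accuracy-improving methods above, a common way to apply the ZUPT idea in foot-mounted navigation is to detect the stance phase (the short interval when the foot rests on the ground) and reset the integrated velocity to zero there. The sketch below is only a schematic illustration of such detection; the window length and thresholds are invented for the example and would have to be tuned to the sensors actually used.

import numpy as np

G = 9.80665

def stance_phase(acc, gyro, win=10, acc_tol=0.3, gyro_tol=0.05):
    # acc, gyro: (N, 3) arrays of accelerometer (m/s^2) and gyroscope (rad/s) samples.
    # Returns a boolean array: True where the sensor is (almost) at rest.
    acc_mag = np.linalg.norm(acc, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    still = np.zeros(len(acc), dtype=bool)
    for i in range(len(acc)):
        lo, hi = max(0, i - win), min(len(acc), i + win)
        quiet_acc = np.all(np.abs(acc_mag[lo:hi] - G) < acc_tol)
        quiet_gyro = np.all(gyro_mag[lo:hi] < gyro_tol)
        still[i] = quiet_acc and quiet_gyro
    return still

def apply_zupt(velocity, still):
    # Zero-velocity update: reset the integrated velocity during detected stance phases,
    # which prevents accelerometer errors from accumulating without bound.
    velocity = velocity.copy()
    velocity[still] = 0.0
    return velocity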
Figure 5: IMU EEAS placed on the cart

3. Design of data processing

The dead-reckoning algorithm (Section 2.1) will be used to calculate the position, velocity and attitude from the output of the IMUs. Unfortunately, there are errors in the accelerometer and gyroscope measurements, which are accumulated in the integration process; random accelerometer and gyroscope noise has a cumulative effect too. These facts cause the accuracy of the trajectory to decrease significantly over time. An important part of the data processing will therefore be the suppression of the noise in the measurements. Before the integration process, a low-pass filter will be used to suppress the high-frequency components which contain the noise. The next step will be integration using the fast Fourier transform (FFT). The FFT allows the measured signal to be transformed from the time domain to the frequency domain; in the frequency domain the noise frequencies are removed, and afterwards the inverse FFT is used to convert the signal back to the time domain. Double numerical integration is then used to calculate the position from the signal without noise ([9]).

Figure 6: Scheme of the integration using the fast Fourier transform

4. Conclusion

This paper describes the exploitation of low-cost inertial measurement units for indoor applications. The main aim of the paper is to design a test of IMUs for different types of movements (short-term linear movement, advanced long-term movement and movement on a testing cart), which represent typical situations in indoor navigation. The results of the experiments give information about the accuracy of the tested IMUs, which is the limiting factor for their possible application in pedestrian navigation, navigation of machines in buildings, etc. The paper also describes the data processing and the methods which could help to improve the accuracy of position and orientation. In future work we will test two IMUs (iNEMO STEVAL-MKI062V2 and IMU EEAS) using the proposed test. The test results will give information about the possibility of using the tested IMUs in indoor navigation.

Acknowledgement

"This publication was supported by the Competence Center for SMART Technologies for Electronics and Informatics Systems and Services, ITMS 26240220072, funded by the Research & Development Operational Programme from the ERDF."

References

[1] Groves, P. D. 2008. GNSS Technology and Applications Series: Principles of GNSS, Inertial and Multisensor Integrated Navigation Systems. London: Artech House, 2008. 552 p. ISBN-13: 978-1-58053-255-6.

[2] Farrell, J. A. 2008. Aided Navigation: GPS with High Rate Sensors. USA: The McGraw-Hill Companies, 2008. 553 p. DOI: 10.1036/0071493298.

[3] Titterton, D. 2004. Strapdown Inertial Navigation. Cornwall: MPK Books Limited, 2004. 510 p. ISBN 0-86341-358-7.

[4] Foxlin, E. Pedestrian tracking with shoe-mounted inertial sensors. IEEE Computer Graphics and Applications, December 2005, pp. 38–46.

[5] Colomar, S. D. Step-wise smoothing of ZUPT-aided INS. 2012. PhD thesis. KTH.

[6] El-Sheimy, N., Hou, H. and Niu, X. Analysis and modeling of inertial sensors using Allan variance. In: IEEE Transactions on Instrumentation and Measurement, 2008, 57(1), pp. 140–149.
[7] ruiz, a. r. j. et al. pedestrian indoor navigation by aiding a foot-mounted imu with rfid signal strength measurements. in: indoor positioning and indoor navigation (ipin), 2010 international conference on. ieee, 2010. p. 1-7. [8] jiménez, a. r., et al. improved heuristic drift elimination (ihde) for pedestrian navigation in complex buildings. in: indoor positioning and indoor navigation (ipin), 2011 international conference on. ieee, 2011. p. 1-8. [9] slifka, l. an accelerometer based approach to measuring displacement of a vehicle body. master of science in engineering, department of electrical and computer engineering, university of michigan–dearborn, 2004. development and testing of inspire themes addresses (ad) and administrative units (au) managed by cosmc michal med, petr souček the czech office for surveying, mapping and cadastre (cosmc) prague, the czech republic michal.med@cuzk.cz, petr.soucek@cuzk.cz abstract main content of this article is to describe implementing inspire themes addresses and administrative units in czech republic. themes were implemented by czech office for surveying, mapping and cadastre. implementation contains developing gml files with data and designing its structure, developing and testing of inspire services and preparing metadata for data and services. besides harmonised inspire themes cosmc manages also non-harmonised themes cadastral map (km) and units extended (ux). keywords: inspire, cadastre, addresses, cadastral parcels, administrative units, buildings, metadata, rúian, services, wms, wfs, gml 1. introduction inspire – infrastructure for spatial information in europe is a directive of european commission and council, which was transposed into czech legislation in 2009 by the law number 380/2009 col., which amends laws number 123/1998 col., on the right to information about environment and number 200/1994 col., about surveying. figure 1: inspire logo from the law number 123/1998 col. come (among others) following duties: • create and manage metadata for spatial data files and for network services, geoinformatics fce ctu 11, 2013 77 med, m., souček, p.: development and testing of inspire themes . . . • harmonise spatial data sets according to the directive, • create interoperable network services. important part of implementation is also giving information on implementation to public. all basic information about implementation of data and services are available at geoportal cosmc1 in czech and in english in a bookmark inspire. pages for themes cadastral parcels (cp), addresses (ad) and administrative units (au) have a special look and structure. it was designed for better intelligibility of data and services for users. from the geoportal, there is also possibility of downloading data and access network services. data, services and informations are available on national inspire geoportal2, administered by cenia, czech environmental agency. figure 2: cosmc geoportal this geoportal should collect all datasets relevant to inspire including services and metadata. unfornutelly, at least in my opinion, it’s used more like trash can for all data sets which contain some part of data even distantly similar to those relevant to inspire. searching of data and series is mediated through inspire discovery services. discovery services are searching in metadata, specifically in keyword elements. every provider can write into keywords anything he wants. that could be, and is, a problem. 
for accesing all data and services managed by cosmc, either by section of central database or by surveying office, i conclusively recommend using the geoportal cosmc. 2. implementation during implementation of inspire themes addresses and administrative units, datasets and services harmonised by implementation rules of inspire directive were designed, developed 1http://geoportal.cuzk.cz/ 2http://geoportal.gov.cz/ geoinformatics fce ctu 11, 2013 78 http://geoportal.cuzk.cz/ http://geoportal.gov.cz/ med, m., souček, p.: development and testing of inspire themes . . . and tested. technical guidances for services and data specifications were used during the implementation. next step was making of metadata records. metadata records serve as a description of data or services, not only human readible, but primarily computer readable. i am personally engaged in a process of implementation since making of metadata records for data of the theme cadastral parcels. themes addresses and administrative units were implemented from the beggining to the very end with my participation. during implementation of themes addresses and administrative units, metadata and data of the theme cadastral parcels were revised. this implementation took place in a few steps in the following order: • analysis of data specifications and technical guidances inspire, • analysis of data in databases of cosmc, • design of a data files structure, • design of supported operations and planned limits for view and download services, • creating of metadata records, • testing and analysis of prepared data and services, • revision of data files and services, • revision of metadata, • creating of promotional materials, • publishing of data, services and metadata on the web http://services.cuzk.cz/, • publishing of promotional materials on the web http://geoportal.cuzk.cz/. in the future, section of central database is going to continue in the implementation of inspire directive with the theme buildings (bu). concurrently with implementation of next inspire theme, revisiones of already done themes are taking place in legislation. revisions are based on experience and users feedback. 2.1. data preparation of data is based on data specifications on themes. preparation of data is devided into three phases. in the first one, i have studied data specifications on addresses and administrative units. second step was to analyse corresponding data in cosmc databases3. during the analysis it’s necessary to decide which data from database are suitable to data structure according to the specification. for that purpose, i have made schemes of usage. in the third phase i have prepared sample file in gml 3.2.1 format. the sample file for each theme was sent to the firm geovap, the developer of software marushka®, which mediates generating of predefined gml files according to sample file for each theme. the basic dispensing unit is different for each theme. for the theme cadastral parcels, there is one predefined file for each cadastral zoning. addresses have one file for each municipality and all data for the theme administrative units are distributed in only one file for the whole 3iskn – information system of cadastre of real estates, isúi – information system of territorrial identification, zabaged – fundamental base of geographic data geoinformatics fce ctu 11, 2013 79 http://services.cuzk.cz/ http://geoportal.cuzk.cz/ med, m., souček, p.: development and testing of inspire themes . . . 
figure 3: uml diagram of application scheme on theme addresses czech republic. predefined files are generated daily and are available for free on the page http://services.cuzk.cz/gml/inspire. marushka® software, besides providing of predefined files, also mediates inspire harmonised download and view services according to the inspire technical guidance for services. these services are realised through ogc and iso standards about wms 1.3.0 and wfs 2.0.0. 2.2. services according to the inspire directive there is five types of services, which has to be provided for rightful implementation. these services shall be implemented: • discovery services – allow to search for data ad services according to keywords in metadata, • view services – allow viewing data through web mapping services in version 1.3.0, • download services – allow donwloading data through web feature service in version 3.0 or through predefined gml files, • trensformation services – allow transformation of spatial data, • startup services – allow access for other types of services. from the inspire implementations point of view i was especially interested about implementation of download and view services, which allow direct access to the data. data are continually updated in publication database. sources of the data of publication database geoinformatics fce ctu 11, 2013 80 http://services.cuzk.cz/gml/inspire med, m., souček, p.: development and testing of inspire themes . . . are isúi and iskn. data are essentially current, as the age of data two hours are featured. figure 4: process of managing data from databases isúi and iskn for predefined gml files and wms and wfs services source: ing. petr souček, ph.d. inspire view services are realised through web mapping service 1.3.0. besides this version, older version 1.1.1 is also supported, but not forced by inspire directive. access point for service is web address according to this model: http://services.cuzk.cz/wms/inspire[theme]-wms.asp?. for example, view service for the theme addresses has acces point on address http://services.cuzk.cz/wms/inspire-ad-wms.asp?. in order to simplify accessing data i have created a set of guidelines for using wms services. ther is one document for each theme: • http://services.cuzk.cz/doc/inspire-ad-view.pdf – for addresses, • http://services.cuzk.cz/doc/inspire-au-view.pdf – for administrative units, • http://services.cuzk.cz/doc/inspire-cp-view.pdf – for parcels, which contains list of available layers, supported coordinate systems and samples of requests. download services are realised according to technical guidelines via wfs in version 2.0.0 and through predefined data files in gml format. older versions of wfs aren’t supported. problem is, that wfs 2.0.0 is not supported by most software. only one i know about, that supports this service is qgis. access is mediated through plugin wfs 2.0 written by jürgen weichand. manual on downloading and using this plugin, including basic examples of working with it, is to found on this page: http://services.cuzk.cz/doc/manual-wfs20-qgis.pdf. access point for web feature service is web page according to the model http://services. cuzk.cz/wfs/inspire-[theme]-wfs.asp?. here’s an example for addresses: http://services. cuzk.cz/wfs/inspire-ad-wfs.asp?. same as for wms, for wfs i have created manuals too. they contain information about structure of data available through web feature service and about the usage of this service. 
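To illustrate how the download service can be called, the short sketch below builds standard WFS 2.0 key-value-pair requests against the access point for the Addresses theme quoted above. The GetCapabilities call uses only standard OGC parameters; the feature type name ad:Address in the GetFeature call is an assumption made for the example and should be taken from the service's own capabilities document.

from urllib.parse import urlencode

BASE = "http://services.cuzk.cz/wfs/inspire-ad-wfs.asp"

def wfs_url(**params):
    # Build a WFS 2.0 key-value-pair request URL
    common = {"service": "WFS", "version": "2.0.0"}
    return BASE + "?" + urlencode({**common, **params})

# Capabilities of the download service (lists feature types and stored queries)
print(wfs_url(request="GetCapabilities"))

# A small GetFeature request; the type name below is assumed, check GetCapabilities first
print(wfs_url(request="GetFeature", typeNames="ad:Address", count=10))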
there is one document for each theme at the following addresses: • http://services.cuzk.cz/doc/inspire-ad-download.pdf – for addresses, • http://services.cuzk.cz/doc/inspire-au-download.pdf – for administrative units, geoinformatics fce ctu 11, 2013 81 http://services.cuzk.cz/wms/inspire-ad-wms.asp? http://services.cuzk.cz/doc/inspire-ad-view.pdf http://services.cuzk.cz/doc/inspire-au-view.pdf http://services.cuzk.cz/doc/inspire-cp-view.pdf http://services.cuzk.cz/doc/manual-wfs20-qgis.pdf http://services.cuzk.cz/wfs/inspire-ad-wfs.asp? http://services.cuzk.cz/wfs/inspire-ad-wfs.asp? http://services.cuzk.cz/doc/inspire-ad-download.pdf http://services.cuzk.cz/doc/inspire-au-download.pdf med, m., souček, p.: development and testing of inspire themes . . . • http://services.cuzk.cz/doc/inspire-cp-download.pdf – for parcels. besides on-line access to data there is also a possibility to get a data through predefined gml files as described before. 2.3. metadata metadata harmonised to inspire has to follow technical guideline for metadata. its newest version (1.3) has been released on the 6th of november 2013. metadata published by cosmc within inspire is possible to divide into two parts. first one could be called "static", second one "dynamic". static metadata include metadata for series of inspire datasets, metadata for inspire download services and metadata for inspire view services. dynamic metadata include getcapabilities documents for wms and wfs, getfeatureinfo document, describestoredqueries and other documents relative to network services. as the inspire harmonised metadata are considered all metadata from the first category. technical guideline for metadata comes from technical norms iso 19115 and iso 19119 and national metadata profile and metadata profile of cosmc also follow these norms. metadata profile of cosmc includes everything what is required by inspire technical guidelines and national metadata profile and even more. therefore i have used cosmc profile while i was creating metadata for inspire themes. all metadata has an identifier, which is unique in the scope of cosmc namespace. combination of an identifier and namespace identifies metada record uniquely in the scope of the whole inspire. metadata describe service or metadata they are attached to. besides description info they contain keywords. keywords serves for discovering products through inspire discovery services. every metadata record has a keyword according to gemet thesaurus. for the data metadata, gemet keyword serves as an identifier of the inspire theme. services metadata have an additional gemet keyword which serves as an identifier of the type of inspire service. other keywords should come from vocabulary of cosmc, but not all inspire related keywords are included. currently we have initialized negotiations with terminological commision about adding new keywords to the vocabulary. most of them are related to inspire and basic registers. metadata also include information about data quality and its testing. for data and services, only tests used were on inspire consistency and data completeness. 3. what’s next? by publishing data, metadata and services, implementation of inspire isn’t done yet. we have found a lot of mistakes and comments during implementation and i believe that so did most of european developers and analysts working on implementation of inspire. thats a reason why maintanatce and implementation group (mig) and pool of experts were founded. ing. jiří poláček, csc. 
is mig member and both authors of this article are members of pool of experts. implementation of inspire is moving from the opening phase into the maintanance phase. within improvement of inspire data and services it’s really important users’ feedback and geoinformatics fce ctu 11, 2013 82 http://services.cuzk.cz/doc/inspire-cp-download.pdf med, m., souček, p.: development and testing of inspire themes . . . continual development of data„ metadata and services. during my work on addresses and administrative units i have revised metadata for cadastral parcels, which were published more than a year ago. in the same time, interoperability of data and services between neighbour countries is going to be tested. czech data and services are now tested together with slovaks and cooperation with other neighbour states will follow. references [1] poláček, j. souček, p.: implementing inspire for the czech cadastre of real estates, geoinformatics fce ctu 8, 2012, pp. 9–16. [online] [cited: december 27, 2013.] http: //geoinformatics.fsv.cvut.cz/pdf/geoinformatics-fce-ctu-2012-08.pdf geoinformatics fce ctu 11, 2013 83 http://geoinformatics.fsv.cvut.cz/pdf/geoinformatics-fce-ctu-2012-08.pdf http://geoinformatics.fsv.cvut.cz/pdf/geoinformatics-fce-ctu-2012-08.pdf geoinformatics fce ctu 12, 2014 4 stability determination of the surface area of the prague castle by the periodically measured levelling network and robust analysis martin štroner, rudolf urban, tomáš kubín czech technical university in prague, faculty of civil engineering, thákurova 7, 166 29 prague, czech republic, e-mail: martin.stroner@fsv.cvut.cz, rudolf.urban@fsv.cvut.cz, tomas.kubin@fsv.cvut.cz. abstract in the area of prague castle there is already about 10 years of periodical measurement of the height changes in the buildings and structures being performed. the measurement method is precise levelling predominantly. until now, these measurements have been evaluated only locally with respect to each building and its stability without an overall view of the situation of possible movements of individual parts of the surface prague castle. whereas there are height shift of some points between epochs undoubtedly, a new and complete adjustment of each measured epoch and mutual assessment of changes between epochs using robust analysis was conducted. this comparison shows the relative movement of certain parts against another. the results are consistent with current knowledge of the geology in the area of the prague castle. keywords: robust analysis, deformation analysis, precise levelling, prague castle 1. introduction prague castle is one of the most important historical, political and tourist areas of the czech republic, since 1918 also the seat of the president of the czech republic. according to [1], the prague castle complex was created by sequential additions and renovations of the settlement founded in the 9th century. with its dimensions of 570 m length and 128 m width it is one of the largest castle in the world. it is considered to be not only symbol of the city, but also the czech statehood. historic buildings located in the area are however affected by the aging process and the effects of changes in the surroundings. in order to predict further developments in this area, the long term periodic measurements for determining the stability of historic buildings in the area of the prague castle are carried out. 
geology in the area plays major role, according to [2] it was originally not complicated, but anthropogenic activities related to structural modifications of hrad�any hill during the last centuries made it considerably more complicated. the bedrock of the area has been reworked and expanded by the fills of different origin. the first measuring was conducted by the department of special geodesy in 1999, since then it is still ongoing and have been supported by several grants. the findings and conclusions of the measurements were summarized in [3]. these measurements were initially concentrated on the fault monitoring of individual buildings, and later connected via a network of reference points for both height measurements (precise levelling) and the position measurement. but because of this nonsystematic evolution of the measurement there are differences between epochs in configuration of the network and of monitored points, according to actual demand. so far these measurements were evaluated only locally with respect to each building and it’s stability without an overall view of the situation of possible movements of individual parts of the surface of the prague castle area. štroner, m. et al: stability determination of the surface area of the prague … geoinformatics fce ctu 12, 2014 5 a comprehensive evaluation of the height shifts is further discussed in this article.2. geodetic measurements at prague castle geodetic measurements were in the area of prague castle carried out in various range since its construction, but the periodic monitoring of selected historic buildings and slope stability is a matter of the last 10 years. there are changes monitored in tilt and relative height in individual buildings and areas. for analysis only height measurement was chosen. the reason for this decision is a very high and long-term achieved precision of 0.7 mm / km, also the high reliability and resistance of the method to systematic errors. height measurements are almost entirely a matter of method levelling from the center with the addition of precise trigonometric method used to bridge the jelení p�íkop (deer moat). scheme of the performed measurements is on figure 1. measurements were conducted with use of the zeiss koni 007 instrument, and in the last two epochs the digital levelling instrument trimble dini 12t was used. figure 1: scheme of the height geodetic measurements in the prague castle area 3. new calculation of epoch levelling measurements overall, there were 18 epochs re-processed and re-adjusted, all of them measured between years 2004 and 2012. original intention of the measurements was not to carry out assessments, but monitoring of individual objects. monitored objects changed during the years and in different epochs unequal sets of points were measured. because of the relative solution of all monitoring, the measurement onto the stable points outside the prague castle area was not performed, and therefore none of the points can be considered to be stable. processing was made with regard to these facts in epochs by the least squares adjustment, in the gnu gama software (gama-local ver. 1.13, more in [4]). as the measured values served averaged height differences measured back and forth. the a priori standard deviation was chosen to be �0 = 0.7 mm, measurements were conducted predominantly by the zeiss koni 007 instrument. štroner, m. 
et al: stability determination of the surface area of the prague … geoinformatics fce ctu 12, 2014 6
The results of the new adjustment are the relative heights of the points in each epoch. For further analysis, the standard deviation of the height of one point is assumed to be σ_p = 0.36 mm, as the average of the standard deviations of all points in all epochs. To identify the shifts between the epochs, the height difference between the epochs should exceed Δh = u_p · √2 · σ_p = 1.0 mm for the 95% probability (u_p = 2), or 1.3 mm for the 99% probability (u_p = 2.5).

4. Calculation of robust analysis

The results of the adjustment are the relative heights of the points in each epoch. It is not possible to consider any of the points to be stable; therefore a transformation with redundant measurements was chosen for the analysis and calculated with the use of robust estimation, which is highly resistant against outlying (here shifted) values.

4.1 The basic principle of a robust calculation

Robust adjustment methods are mostly based on the principle of the maximum likelihood method, and their basic property is (compared to the least squares method widely used in geodesy) a high resistance against the influence of outlying measurements. The principles and derivation of the least squares and robust methods can be found in [5]. The most practically usable robust methods are based on adjusting the weights in the least squares calculation (reweighting); such a calculation is then relatively easy. These methods are also presented in [5]. The calculation procedure of the iterative adjustment is based on the assembly of the normal equations in the form

A^T P A dx = A^T P l',   (1)

dx = (A^T P A)^{-1} A^T P l',   (2)

where A is the Jacobian matrix, P the diagonal matrix of weights (with the measurement weights p_ii = k/σ_i² on the diagonal), dx the increment vector of the unknowns and l' the vector of reduced measurements. With the robust weight change,

dx = (A^T P W A)^{-1} A^T P W l',   (3)

where the robust weight change is determined by the equation

W = diag(w_1, w_2, ..., w_n);  w_i = f(σ_i, v_i),   (4)

and the corrections are determined by the equation

v = A dx - l'.   (5)

The robust changes are derived from the standard deviations of the measurements and from the corrections obtained in the adjustment. Various methods of calculating the changes in the weights can be used, derived on the basis of the expected probability distribution of the deviations from the normal distribution. Worth mentioning here is the Huber method (described in [5]). When creating his robust estimator, Huber started from the normal distribution of a random variable; his solution is based on replacing the edge parts of the normal probability distribution by the Laplace distribution (a special form of the exponential distribution), which leads to a greater probability of an outlying measurement at the distribution's edges. For the purposes of the analysis, the L1 norm was used, which, as the probability distribution function, uses the Laplace distribution directly; compared to the normal distribution it has a greater probability of the occurrence of outlying measurements. For homogeneous measurements (measurements with the same standard deviation) the robust weight change is given by the following function, and there is no need to know the standard deviation:

w_i = 1 / |v_i|.   (6)

The calculation is done iteratively; the corrections used to calculate the robust weight changes are always taken from the previous calculation. More on the calculation procedure is given in [5].
4.2 The procedure of calculation using the L1 norm

The individual epochs were adjusted, and to determine the points at which shifts occurred between two epochs i, j, it is necessary to transform the matching points of epoch j to epoch i. The equations of the linear transformation are

x_i = m R x_j + t,   (7)

where x_i, x_j are the coordinate vectors, m the matrix of the scale coefficients, R the rotation matrix and t the translation vector. Only a one-dimensional transformation (heights only) is needed, and the scale between the epochs does not change; the transformation equation for the heights h between epochs i and j therefore reduces to

h_i = h_j + t_{i,j}.   (8)

The relationship between the two epochs is thus determined only by the height shift t_{i,j}, which is the average height difference between the epochs:

t_{i,j} = (1/n) Σ_{k=1}^{n} (h_{i,k} - h_{j,k}).   (9)

Ideally, this shift would fit all points exactly; in practice it does not, and therefore for every point k = 1 … n the corrections can be calculated:

v_k = h_{i,k} - (h_{j,k} + t_{i,j}).   (10)

These corrections contain a component of the measurement inaccuracy and, if there was a height change, this influence too. The mean as a method corresponds to the least squares method and, in the case of outlying measurements (here shifted points), fails to give proper results. For these reasons it is advisable to use a robust method, which does not have such a property. The height difference between the two epochs is then determined by an iterative calculation of the weighted average, where the weights are calculated on the basis of the corrections from the previous (m-th) iteration:

t_{i,j}^{(m+1)} = ( Σ_{k=1}^{n} w_k^{(m)} (h_{i,k} - h_{j,k}) ) / ( Σ_{k=1}^{n} w_k^{(m)} ).   (11)

The individual epochs were not measured at regular time intervals and the measured points also changed, so the procedure was used in which all the other epochs were gradually transformed to one selected epoch (namely epoch 10). Epoch 10 was chosen as the reference epoch because most of the points, both from the initial and especially from the terminal epochs, were measured in it. The calculated shift t_{i,j} itself is not important; significant are the individual corrections signalling the shifts of the points between the epochs.

4.3 Calculation results

The results of the calculation are the corrections determined after the transformation, which can be interpreted as deviations of the individual points from a common level.

Figure 2: Example of the relative shifts of points

When plotted on a graph, these corrections give an idea of the movement of an individual point between the epochs. Because of the large number of points it is not possible to show all of them; an example is shown in Figure 2. A zero shift means that the point was not measured in that epoch. In Figure 2 the characteristic points are marked by an arrow characterising their relative shift; grey dots mark the points considered to be stable.

Figure 3: Relative shifts of points
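A compact way to see equations (6) and (9)–(11) together is the following sketch, which estimates the robust height shift between two epochs and returns the corrections used to flag moved points. It is only an illustration of the iteration described above: the input arrays, the convergence tolerance and the small constant guarding against division by zero are choices made for the example, not values from the paper.

import numpy as np

def robust_epoch_shift(h_i, h_j, iterations=50, tol=1e-6, eps=1e-4):
    # h_i, h_j: heights of the matching points in epochs i and j (same order)
    d = np.asarray(h_i) - np.asarray(h_j)
    t = d.mean()                                 # eq. (9): ordinary mean as the starting value
    for _ in range(iterations):
        v = d - t                                # eq. (10): corrections for the current shift
        w = 1.0 / np.maximum(np.abs(v), eps)     # eq. (6): L1-norm weight change
        t_new = np.sum(w * d) / np.sum(w)        # eq. (11): reweighted average
        if abs(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, d - t                              # shift between epochs and final corrections

# Illustrative use: one point shifted by ~4 mm among otherwise stable points
h_epoch_j = np.array([100.0000, 101.2000, 99.5000, 102.3000, 100.8000])
h_epoch_i = h_epoch_j + 0.0003 + np.array([0.0, 0.0, 0.0040, 0.0, 0.0])
shift, corrections = robust_epoch_shift(h_epoch_i, h_epoch_j)
moved = np.abs(corrections) > 0.0010             # compare with the 1.0 mm threshold from the text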
the results are consistent with observed phenomena in the field and so the presented methodology can be considered to be appropriate. acknowledgements the article was written with support of the internal grant of czech technical university in prague sgs14 “optimization of acquisition and processing of 3d data for purpose of engineering surveying“. references [1] prague castle, wikipeadia, cit. 12.1.2014. http://en.wikipedia.org/wiki/prague_castle [2] záleský, j. – chamra, s.: optimalizácia geotechnických konštrukcií: projekt sledování technického stavu historických budov. in: optimalizácia geotechnických konštrukcií, 18. – 19. zá�í 2001, svf stu bratislava, slovenská republika, isbn 80-2271545-x. s. 337–341 [3] procházka, j. ji�ikovský, t. záleský, j. et al.: stabilita historických objekt�. praha: eské vysoké u�ení technické v praze, 2011. 229 s. isbn 978-80-01-04776-7. [4] epek, a.: gnu gama 1.9 adjustment in geodetic networks. edition 0.19, 2005. [5] štroner, m. hampacher, m.: zpracování a analýza m �ení v inženýrské geodézii. 1. vyd. praha: ctu publishing house, 2011. 313 s. isbn 978-80-01-04900-6. geoinformatics fce ctu 12, 2014 16 geodetic measurement of longitudinal displacements of the railway bridge jaroslav braun, martin štroner czech technical university in prague, faculty of civil engineering, department of special geodesy, thákurova 7, 16629 praha 6, czech republic, e-mail: jaroslav.braun@fsv.cvut.cz, martin.stroner@fsv.cvut.cz abstract the paper deals with geodetic measurements of mutual longitudinal displacements of construction of the railway bridge and rails on the bridge in klášterec nad oh�í. construction of the bridge is made of steel with a concrete deck, which carries the stone superstructure and rails. the bridge is about 100 meters long and expected deformations are in millimetres. the method of geodetic network with the expected standard deviations of coordinates about 0.2 mm was chosen. the deformation of the structure was determined to be 4 mm, the deformation of the rails was determined to be 1 mm, both as a result of epoch comparison. key words: bridge, geodetic network, longitudinal shift, railway, standard deviation, track. 1. introduction buildings and constructions are constantly changing by the influence of time, temperature and environment changes, there occur shifts and deformations. for bridge structures, these changes are most obvious and frequently monitored, as they may affect the functionality and safety of the structure. depending on the type and use of the bridge is chosen measurement procedure, which can continuously monitor deformations of the structure [1] or long-term changes resulting from the change of seasons. for the measurement of displacements and deformations of bridges are used special procedures of engineering surveying, which are applied, as mechanical engineering equipment, high-precision levelling [2], laser scanning [3] or precise geodetic networks. changes of the structure can be observed by these methods with high precision, with standard deviation often less than 1 mm. department of special geodesy and department of railway structures, faculty of civil engineering ctu in prague perform monitoring of the railway bridge near klášterec nad oh�í in north-western bohemia. on the structure are observed deformations resulting from the season changes and temperature changes. main attention is focused on the longitudinal changes of the deck and rails, and determination of their dependencies. 
due to the expected changes in the position of points on the rails about 1 mm, the accuracy of analysis before measurement sets a requirement to determine the coordinates of points in the longitudinal direction with a standard deviation no larger than 0.2 mm. to achieve this accuracy was designed local geodetic network with 10 free stations and 56 observed points. since this is an active railroad track with minimal space around tracks, there had to be set up procedures that ensure the required high accuracy, measurement speed, and safety of workers. 2. description of bridge construction monitored bridge structure is located on the railway line between klášterec nad ohri and pernštejn in north-western bohemia. the bridge is 8 m above the river oh�e and is supported at braun, j. et al: geodetic measurement of longitudinal displacements... geoinformatics fce ctu 12, 2014 17 3 points (figure 1). the pillar 1 is attached firmly and on pillars 2 and 3 are movable pot bearings. the supporting steel structure is a lattice beamed with reinforced concrete bridge deck (figure 3). the bridge deck is 85 m long and 10 m wide. on the bridge deck is continuous bed of gravel, which stores 2 tracks (straight line). gravel is not linked directly to the rails, and therefore the strain of the deck should not influence the strain of the rails. monitored points were determined on the second track (101-119, 201-219) and on the deck (1002-1013) by the department of railway structures. points are stabilized by the reflective foils and are glued under the rail head and on deck are glued near the bottom of metal beams of the noise barrier (figure 2). on the edges of expansion joints between the deck and the pillars are glued points with a center punch (5001, 5002, 6001, 6002) to measure changes in the expansion joint by the mechanical engineering gauge (figure 2). to geodetic network points were added point 1001 and 1014, stabilized by the reflective foil on the pillars of traction lines. figure 1: scheme of bridge construction and points’ placement figure 2: stabilization of points 3. a priori accuracy analysis to achieve the demanded accuracy, the configuration of geodetic network and measurement accuracy was designed with use of the precisplanner3d software [4]. there was proposed successive measurement from 10 standpoints (4001-4010), which were chosen to be as far as possible from monitored and measured points, thus on the southeast side of the bridge deck. it was designed, that from each standpoint will be measured 4 connecting points (1001, 1014, 5001 a 5002), furthermore at least two points on the deck on each side of the position of the total station braun, j. et al: geodetic measurement of longitudinal displacements... geoinformatics fce ctu 12, 2014 18 and at least three sections on rails on each side of the instrument position. mutual measurement among standpoints is not assumed. this configuration should ensure that each observed point will be measured from at least three standpoints and there will be achieved favourable angles of intersection. when designing the required measurement accuracies it was considered, that targeting on the nearby points will be difficult and it needs an extra care. for direction measuring was chosen standard deviation 1.0 mgon and for slope distances 1.0 mm. 
The points on the rails were considered with angular measurements only, because the distance measurement might not be possible at all points due to the considerably non-perpendicular lines of sight against the target reflective foils. Based on these assumptions and the chosen layout of the standpoints, the expected standard deviation of the coordinates of the observed points was calculated; in the longitudinal direction it was 0.2 mm.

Figure 3: Railway bridge in Klášterec nad Ohří

4. Equipment

For this precise measurement, the best equipment available at the Department of Special Geodesy, Faculty of Civil Engineering, CTU in Prague was used, namely the total station Trimble S6 HP (σφ = σζ = 0.3 mgon, σd = 1 mm + 1 ppm·d). A Leica prism GMP111 was also used for signalling points 5001, 5002, 6001 and 6002, and a Greisinger digital thermometer and barometer for the physical reductions.

5. Description of measurement

In the a priori accuracy analysis, sequential measurement from 10 standpoints was proposed, and the required measurement accuracy of 1.0 mgon is achieved by the selected total station Trimble S6 with measurements in one round. Because the measurement is done directly on the bridge, the measurement has to be interrupted whenever a train passes: when a train passes on track 2, the measurement is interrupted; when a train passes on track 1, it is necessary to leave the bridge. A special measurement procedure therefore had to be chosen to make the measurement faster and more flexible. At each point, face I and face II are measured consecutively. First the connecting points (1001, 1014, 5001 and 5002) are measured, followed by the points on the deck and then the points on the rails. If the measurement is interrupted, the standpoint is measured again: all the connecting points and the points on the deck are remeasured, and then the remaining points on the rails which were not measured before the interruption are measured. After the completion of all points, a check measurement of at least two measured points (usually 1001 and 1014) is required to verify the stability of the standpoint. When measuring distances to the foils (points on the rails) there was a problem that the angle of incidence was too large and the instrument was not able to measure in prism mode, so the distances were measured in non-prism mode; the measured distances were, however, not used in the calculation because of the higher standard deviation (σd,np = 3 mm + 2 ppm·d).

5.1 Epochs of measurement

The measurement has so far been carried out in three stages under different temperature conditions. The dates of the epochs were chosen with regard to long-term stable temperature conditions, so that the structure had stabilised.

1st epoch: 22. 8. 2013 (8:40 – 14:00), 14 °C – 28 °C, cloudy with gradual clearing, number of measurements: 708.
2nd epoch: 19. 9. 2013 (8:40 – 12:45), 9 °C – 13 °C, cloudy changing to scattered clouds, number of measurements: 711.
3rd epoch: 14. 11. 2013 (9:10 – 13:30), 1 °C – 4 °C, cloudy, number of measurements: 780.

6. Calculations

Each of the epochs was calculated separately and adjusted by the least squares method in the GNU Gama software [5] with a standard deviation of 1.0 mm for slope distances and 1.0 mgon for horizontal directions and zenith angles. Only those slope distances were used where the difference between the values measured in face I and face II was smaller than 3 mm.
Bigger differences could be caused mainly by very slanted lines of sight.

6.1 Evaluation of the results of the adjustment

After the adjustment it is generally appropriate to assess whether the achieved accuracy corresponds to the accuracy planned before the measurement. The testing was performed using the permissible sample standard deviation (according to [6]). The permissible sample standard deviation s_M and the a posteriori standard deviation s_0 are

s_M = σ_0 · (1 + √(2/n')),  s_0 = √(v^T P v / n'),  P = diag(p_1, p_2, …, p_i, …, p_n),  p_i = σ_0² / σ_i²,   (1)

where σ_0 is the a priori standard deviation used for the weight calculation, n' is the number of redundant measurements, v is the vector of corrections after the adjustment and P is (here) the diagonal matrix of weights.

6.2 Results of the epoch adjustments

The a priori standard deviation was chosen to be 1.0, and there were about 400 redundant measurements in each epoch (s_M = 1.07). The a posteriori standard deviation was 1.03 in the first epoch, 0.87 in the second and 1.06 in the third one. In all epochs the requested standard deviation of the adjusted coordinates, better than or equal to 0.2 mm in the longitudinal direction, was achieved.

6.3 Check measurement

The changes of the expansion joints, i.e. the distances between points 5001 – 6001 and 5002 – 6002, were measured in each epoch by a mechanical engineering gauge. To independently control the results of the geodetic measurement, points 6001 and 6002 were additionally measured in the third epoch to allow a confrontation of the geodetic and gauge measurements. The distances calculated from the adjusted coordinates were compared to the distances measured directly by the gauge. The difference was 0.7 mm for the distance 5001 – 6001 and 0.1 mm for the distance 5002 – 6002, both smaller than the maximum permissible difference.

7. Comparison of epochs

To compare the epochs with each other, both must be in the same coordinate system. The stability of the connecting points was checked by comparing the distances determined in the individual epochs. It was found that the points are not stable and that they change their distances, points 1001 and 1014 even by 5 mm. Therefore, in each epoch, point 5001 was given the same coordinates and the x+ coordinate axis was set parallel to the line 5001 – 5002. The coordinates of each point were compared between the epochs and plotted (Figure 4), with the first epoch as the reference.

Figure 4: Graphs of the longitudinal displacements of the points on the rails and on the deck

The differences in the x coordinates represent the longitudinal shifts of the points. For all coordinate shifts the maximum permissible differences were calculated (on the basis of the standard deviations of the adjusted coordinates) according to

Δx_met = u_p · √(σ²_xet,i + σ²_xet,j),   (2)

where u_p is the reliability coefficient (here 2.5, because the measurement is difficult to check and may be influenced by systematic errors) and σ_xet,i is the standard deviation of the adjusted coordinate. The mean maximum permissible difference is Δx_met = 0.8 mm; if it is exceeded, it can be reasonably expected that the observed point has shifted.

8. Conclusions

Longitudinal deformations of the bridge deck and rails were measured on the railway bridge in Klášterec nad Ohří in three epochs. The observation method of a local combined geodetic network with ten standpoints was used. The adjusted coordinates were determined with an accuracy of 0.2 mm in
from the presented results it follows that the deformations of the bridge deck and of the rails are mutually independent. the rails extend by 1 mm between pillars no. 2 and no. 3 (points 10 – 17) when the temperature changes by 10 °c, while the bridge deck extends uniformly by up to 4 mm (points 1004 – 1011).

acknowledgements

the article was written with support from the internal grant of the czech technical university in prague no. sgs14/049/ohk1/1t/11 "optimization of the acquisition and processing of 3d data for the needs of engineering surveying".

references

[1] lipták, i., kopáčik, a., erdélyi, j., kyrinovič, p. (2013). dynamic deformation monitoring of bridge structure. selected scientific papers – journal of civil engineering. vol. 8, issue 2, pp. 13–20, issn 1338-7278, doi: 10.2478/sspjce-2013-0014
[2] bureš, j., klusáček, l., nečas, r., švábenský, o. (2011). measuring technology during reconstruction of prestressed gagarin bridge. ingeo 2011 – proceedings of the 5th international conference on engineering surveying, isbn 978-9536082-15-5
[3] kopáčik, a., erdélyi, j., lipták, i., kyrinovič, p. (2013). deformation monitoring of bridge structures using tls. in 2nd joint international symposium on deformation monitoring (jisdm). nottingham: university of nottingham
[4] štroner, m. (2010). vývoj softwaru na plánování přesnosti geodetických měření precisplanner 3d. stavební obzor. vol. 19, no. 3, pp. 92-95. issn 1210-4027, (in czech)
[5] čepek, a. (2013). gnu gama 1.14 adjustment in geodetic networks. edition 1.14 [online]. available: http://www.gnu.org/software/gama/manual/index.html
[6] štroner, m., hampacher, m. (2011). zpracování a analýza měření v inženýrské geodézii (processing and analysis of measurements in engineering surveying). 1st edition. prague: ctu publishing house. 313 pp., isbn 978-80-01-04900-6.

measurement of deformations by mems arrays, verified at sub-millimetre level using robotic total stations

tomas beran1, lee danisch1, adam chrzanowski2, maciej bazanowski2
1 measurand inc., 2111 hanwell road, fredericton, new brunswick, e3c 1m7 canada, e-mail: tomas@measurand.com, lee@measurand.com
2 canadian centre for geodetic engineering, university of new brunswick, po box 4400, fredericton, new brunswick, e3b 5a3 canada, e-mail: adamc@unb.ca, maciej.bazanowski@unb.ca

abstract

measurement of sub-millimetre-level deformations of structures in the presence of ambient temperature changes can be challenging. this paper describes the measurement of a structure moving due to temperature changes, using two shapeaccelarray (saa) instruments, and verified by a geodetic monitoring system. saa is a geotechnical instrument often used for monitoring of displacements in soil. saa uses micro-electromechanical system (mems) sensors to measure tilt in the gravity field. the geodetic monitoring system, which uses alert software, senses the displacements of targets relative to control points, using a robotic total station (rts). the test setup consists of a central four-metre free-standing steel tube with other steel tubes welded to most of its length. the central tube is anchored in a concrete foundation. this composite "pole" is equipped with two saas as well as three geodetic prisms mounted on the top, in the middle, and in the foundation. the geodetic system uses multiple control targets mounted in concrete foundations of nearby buildings, and at the base of the pole.
long-term observations using the two saas indicate that the pole is subject to deformations due to cyclical ambient temperature variations, causing the pole to move by a few millimetres each day. in a multiple-day experiment, it was possible to track this movement using the saas as well as the rts system. this paper presents data comparing the measurements of the two instruments and provides a good example of the detection of two-dimensional movements of seemingly rigid objects due to temperature changes.

key words: deformation monitoring, geodetic systems, geotechnical instrumentation

1. introduction

automated geodetic and geotechnical monitoring systems are playing a rapidly-increasing role in risk-reduction efforts concerning structures and their interaction with soils. geodetic monitoring systems originated in the measurement and representation of the surface of the earth, while geotechnical systems originated in the study of the behaviour of earth materials ([5]). geodetic monitoring systems are represented in this paper by an automated deformation monitoring system developed by the canadian centre for geodetic engineering at the university of new brunswick. the system in this example relies on measurements of angles and distances by a robotic total station (rts), relative to stable reference points. alert uses proprietary software ([1]) for automated data collection, data transfer and data processing. geotechnical monitoring systems are represented here by a shape-sensing instrument called shapeaccelarray (saa). saa is an array of rigid segments connected by flexible joints that can bend in any direction, but cannot twist. each segment is equipped with three mems accelerometers, sensing three orthogonal components of gravity ([3]). in a typical near-vertical installation, the 3d shape of the static saa is determined by sensing the acceleration of the x and y accelerometers, knowledge of the segment length (50 cm in this case) and by employing rotational transforms relating one segment to the next segment ([2]). the accelerometer output is influenced by temperature, so all saas include digital temperature sensors, used to compensate the mems sensors for temperature-induced errors. objectives of the experiment included verifying the self-consistency of saa measurements (by comparing the two saa results), and verifying the precision of saas by comparing saa results to those from the rts system, used as a de facto standard. of particular interest was the verification of new temperature compensation algorithms in the saa software, in a field setting. the test setup described in this paper is located in hanwell, new brunswick, canada. it consists of two 4-metre vertical saas (with eight 50 cm long segments) inserted in 2.5 cm diameter steel pipes. there are four 2.5 cm diameter pipes and one 5 cm diameter steel pipe mounted on the circumference of a 12.5 cm diameter steel pipe. this 4 metre-high vertical assembly is referred to as the "pole" in this paper. the 12.5 cm diameter steel pipe is anchored in a 1.2 metre-deep concrete foundation.
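a rough illustration of the tilt-to-shape computation outlined above (a simplified direction-cosine version, not measurand's actual algorithm; the accelerometer readings are invented for the example):

import math

G = 9.81            # gravitational acceleration [m/s^2]
SEGMENT_LEN = 0.5   # segment length [m], 50 cm as in the paper

# assumed per-segment x/y accelerometer readings [m/s^2], from the anchored end upwards
readings = [(0.02, -0.01), (0.03, 0.00), (0.01, 0.02), (0.00, 0.01),
            (-0.01, 0.03), (0.02, 0.02), (0.01, -0.02), (0.00, 0.00)]

def saa_shape(acc_xy, seg_len=SEGMENT_LEN, g=G):
    """accumulate segment vectors of a near-vertical array into joint coordinates."""
    x = y = z = 0.0
    shape = [(x, y, z)]
    for ax, ay in acc_xy:
        cx, cy = ax / g, ay / g                        # direction cosines w.r.t. the horizontal axes
        cz = math.sqrt(max(0.0, 1.0 - cx**2 - cy**2))  # remaining (near-vertical) component
        x, y, z = x + seg_len * cx, y + seg_len * cy, z + seg_len * cz
        shape.append((x, y, z))
    return shape

top = saa_shape(readings)[-1]
print("horizontal offset of the top [mm]:", round(top[0] * 1000, 2), round(top[1] * 1000, 2))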
the geodetic network surrounding the pole consists of four control points located in the concrete foundations of nearby buildings (control1, control2, control3, control4), one control point located in the concrete foundation of the pole (control5), two observed points located at the top and in the middle of the pole (top, middle), and one rts located about 5 m east of the pole. the azimuth between the points station – control1 was selected as 0° 0’ 0” and roughly coincides with magnetic north. figure 1 shows the layout of the monitoring network and the location of control points on the pole. figure 2 shows the location of control points control1, control2 and control3 in the foundation of a nearby building.

figure 1: layout of the geodetic network surrounding the pole holding the saas and location of the observed points on the pole.

the data for this experiment were collected on september 18-20, 2013. the rts used in this case was a leica tca 1800. all control points were equipped with leica "mini prisms". rts measurements were automatically collected and processed using the alert software suite. the data collected by the two saas for about two months prior to the experiment indicate that there is about 2 mm of movement observed at the top of the pole, mostly in the north-south direction. this knowledge led to the selection of an rts with a one arc-second angular resolution. placing this instrument about 5 m from the pole, perpendicular to the direction of the movement, should result in a 0.025 mm resolution in the movement of the pole. the size of the movement depends largely on the temperature variation, so the measurements took place on three consecutive days with a large temperature variation. figure 3 shows maximum daily temperatures of 30 °c on day 1, 33 °c on day 2, and 29 °c on day 3.

figure 2: location of control points control1, control2, and control3.

minimum daily temperatures of about 8 °c were observed during the first night and about 13 °c during the second night. "hour 0" corresponds to 11:00 am local time, september 18, 2013. there are data gaps from hour 15 till hour 22 and from hour 38 to hour 46, which were caused by rts power interruptions. the temperature sensors are located in the fourth segment of the saa relative to the cable (top) end, with a sensor resolution of 0.06 °c. the precision of the 4 m long saa is specified to be 0.53 mm ([4]).

figure 3: temperature variation during the experiment.

2. analysis

the rts data collection cycle (3 sets of angles) took about 20 minutes to complete, which led to a displacement calculation interval of 30 minutes. the data collection interval for the saas was about 10 minutes and the data were down-sampled to 30 minutes to match the rts displacement calculation interval. the reference epoch for the rts and saa data collection was set to hour 0 and the calculated deformations for this epoch are x = 0 mm and y = 0 mm. the x-axes of the two saas were manually oriented in the direction of the nearby buildings, so it was necessary to match the orientation of the x-axes of saa1 and saa2 with rts north. this was accomplished by mathematically rotating the saa1 and saa2 displacements around the gravity vector. the path travelled by the top point during the observation period according to saa1 and rts is shown in figure 4, and the path travelled by the top point during the same period according to saa2 and rts is shown in figure 5.
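a minimal sketch of the two preprocessing steps just described: down-sampling the saa displacement series to the 30-minute rts interval, and rotating the saa horizontal displacement components about the vertical (gravity) axis to match rts north. the rotation angle and the sample series are assumptions for illustration, not values from the experiment.

import math

def downsample(series, step=3):
    """keep every step-th sample, e.g. 10-minute saa data -> 30-minute interval."""
    return series[::step]

def rotate_about_vertical(dx, dy, angle_deg):
    """rotate a horizontal displacement (dx, dy) about the vertical axis by angle_deg."""
    a = math.radians(angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# assumed saa displacements [mm] at a 10-minute interval, and an assumed orientation offset
saa1 = [(0.0, 0.0), (0.1, -0.2), (0.3, -0.1), (0.4, 0.2), (0.6, 0.3), (0.5, 0.5), (0.7, 0.4)]
ALIGNMENT_ANGLE_DEG = 25.0   # assumed angle between the saa x-axis and rts north

aligned = [rotate_about_vertical(dx, dy, ALIGNMENT_ANGLE_DEG) for dx, dy in downsample(saa1)]
print(aligned)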
figure 4: the path travelled by the top point during the observation period measured by saa1 (red) and rts (blue).

figure 5: the path travelled by the top point during the observation period measured by saa2 (red) and rts (blue).

figure 4 and figure 5 show the top point moving from the northwest point to the northeast point in about 4 hours. after that the top point moves southwest for about 2 hours and stays there for the next 9 hours. after that there is a data gap from about hour 15 to hour 22, when the top of the pole returns northwest, and the cycle repeats in the next 24 hours. the resulting east and north deformations of the top point for saa1 and rts are shown in figure 6, and for saa2 and rts in figure 7. on both sets of plots, saa1, saa2 and the rts show changes in displacements during the periods of temperature rise, i.e. hour 0 to hour 5 and about 24 hours later (hour 24 to hour 29) (see figure 3). the spikes measured by both types of instruments at hour 5 to 6 and hour 29 to 30 in the east displacement are thought to reflect a discontinuity in the temperature-driven deformation of the pole, perhaps due to the welds between the multiple pipes of the pole. other characteristics will be discussed after figure 9.

figure 6: east and north displacements for saa1 (red) and rts (blue) for the top point.

figure 7: east and north displacements for saa2 (red) and rts (blue) for the top point.

figure 8 and figure 9 show the east and north deformations of the middle point for saa1 and rts and for saa2 and rts, respectively. the saa1 and saa2 displacements show trends similar to those in figure 6 and figure 7 for the top point, but on a much smaller scale. the top of the pole is free to move, while the bottom is fixed in concrete, so the movement tends to increase toward the top. the top data (figure 6 and figure 7) indicate better agreement between saa1 and the rts than between saa2 and the rts for some time periods. the saa2 disagreements correspond to times when the temperature decreases (hour 5 to hour 15 and hour 30 to hour 38). similar saa2-rts disagreements appear in figure 5. close examination of the saa1 data indicates similar discrepancies during the same cooling periods, although greatly attenuated. the middle data (figure 8 and figure 9) indicate good agreement for both saas. the discrepancy at the top, but not at the middle, is thought to be due to the location of the temperature sensor in each saa, in this case near the middle of each saa. temperature compensation for sensors far from the temperature sensor can suffer if there are spatial temperature gradients. such gradients could be larger in the cool evening hours. it is also possible that the tube containing saa2 does not move faithfully with the target during slow cooling.

figure 8: east and north displacements for saa1 (red) and rts (blue) for the middle point.

figure 9: east and north displacements for saa2 (red) and rts (blue) for the middle point.

3. summary
table 1 shows the root-mean-square (r.m.s.) of the north and east displacement differences between saa1 and rts, and between saa2 and rts, for the top point and the middle point. the r.m.s. of the differences is in all cases less than 1 mm. the average east displacement r.m.s. is 0.30 mm and the average north displacement r.m.s. is 0.33 mm.

table 1: r.m.s. of the east displacement difference and of the north displacement difference.

point | r.m.s. of the east displacement difference | r.m.s. of the north displacement difference
top point: saa1 vs. rts | 0.30 mm | 0.25 mm
top point: saa2 vs. rts | 0.49 mm | 0.60 mm
middle point: saa1 vs. rts | 0.26 mm | 0.24 mm
middle point: saa2 vs. rts | 0.17 mm | 0.23 mm

4. conclusions

measurements of the two saas agree at the sub-millimetre level. the three-instrument comparison (saa1, saa2 and rts) also demonstrates sub-millimetre-level agreement between the two saas and the rts. thus, both objectives (consistency and precision) were achieved over the approximately 20-25 °c temperature swings of this field test.

5. future work

future work will include field testing over other temperature ranges, including higher and below-zero ambients, to further validate the temperature compensation algorithms used with saas. more spatial detail will be obtained from the rts measurements by using more targets, so that a closer comparison to the saa data (saa provides data at 50 cm intervals) can be made. also, a new form of saa having temperature sensors in every segment will be used, to reduce the effects of spatial temperature gradients.

references

[1] chrzanowski, a. and a. szostak-chrzanowski (2010). "automation of deformation monitoring techniques and integration with prediction modeling"; geomatica vol. 64, no. 2, pp. 221-231.
[2] danisch, l., t. abdoun, and m. lowery-simpson, (2007). shape-acceleration measurement device and method, us patent 7,296,363.
[3] danisch, l., chrzanowski, a., bond, j., and bazanowski, m. (2008). "fusion of geodetic and mems sensors for integrated monitoring and analysis of deformations," presented at 13th fig international symposium on deformation measurements and analysis, lisbon, portugal, may 12-15, 2008.
[4] danisch, l., t. patterson, and j. fletcher (2011). "mems-array monitoring of a dam", in proceedings of canadian dam association annual conference. fredericton, nb. october 15-20, 2011.
[5] dunnicliff, j., (1993). geotechnical instrumentation for monitoring field performance, john wiley and sons, new york.

monitoring of a concrete roof using terrestrial laser scanning

ján erdélyi, alojz kopáčik, ľubica ilkovičová, imrich lipták, pavol kajánek
slovak university of technology, faculty of civil engineering, radlinskeho 11, 813 68 bratislava, slovakia
web site: www.svf.stuba.sk, jan.erdelyi@stuba.sk

abstract

the paper deals with the geodetic monitoring of a parabolic-shaped reinforced concrete roof structure in the chemical company duslo, ltd. in šaľa (slovak republic). the monitored structure is a part of the roof of a warehouse used for the storage of fertilizer. the atmospheric conditions and the operation load caused deformation of the construction. terrestrial laser scanning technology was used for the measurement. the displacements of the observed parts of the construction were calculated using planar surfaces by the procedure of singular value decomposition of matrices.
the procedure of the initial and two subsequent epochal measurements of the deformations, the procedure of the data processing, and the results of the deformation monitoring are described.

keywords: deformation monitoring, terrestrial laser scanning, reinforced-concrete construction, singular value decomposition

1. introduction

the weather conditions and the operation load cause changes in the spatial position and in the shape of engineering constructions, which affect their static and dynamic function and reliability. because of these facts, geodetic measurements are an integral part of the diagnosis of engineering structures. this paper presents the geodetic monitoring of a parabolic-shaped reinforced concrete roof construction of a fertilizer warehouse at duslo, ltd., šaľa, which is the largest chemical company in the slovak republic. the operation load and the weather conditions caused a shift between the blocks of the roof during the decades of operation. the measurements were done in 3 epochs during 2 months. the stability of the foundation strips of the construction was monitored by precise levelling. the deformation of the roof construction was measured using terrestrial laser scanning (tls). the advantage of tls over conventional surveying methods is the efficiency of spatial data acquisition. tls allows contactless determination of the spatial coordinates of points lying on the surface of the measured object. the scan rate of current scanners (up to 1 million points/s) allows a significant reduction of the time necessary for the measurement and an increase in the quantity of information obtained about the measured object. to increase the accuracy of the results, chosen parts of the monitored construction can be approximated by single geometric entities using regression; in this case the position of a measured point is calculated from tens or hundreds of scanned points (vosselman et al., 2010).

2. characteristics of the measured object

the measured object is used for the storage of fertilizer in the chemical company duslo, ltd., šaľa. it consists of a reinforced concrete construction with dimensions of 30 m x 170 m and a height of 14 m. a conveyor running along the whole warehouse is situated on the roof in the middle part of the construction. the warehouse is founded on foundation strips (with dimensions 3.7 m x 172.0 m x 1.5 m) and is divided into 5 blocks.

figure 1: the construction of the warehouse

the roof consists of a parabolic-shaped reinforced concrete construction with parabolic transverse beams (with an axial distance of 4.8 m). the warehouse was built in 1960. the operation load of the conveyor and the weather conditions caused deformation of the roof construction during the decades of operation. the mentioned reasons caused a shift of approximately 150 mm between the 1st and the 2nd block, which is visible at the dilatation. the aim of the measurements was the geodetic monitoring of the parts of the roof construction near the dilatation joints, and the determination of the rate of the changes.

3. deformation monitoring

considering the unclear cause of the deformations, the monitoring was designed to be able not only to quantify the movements of the mentioned parts of the roof structure, but also eventual motions of the foundations.
the measurements were done in 3 epochs during 2 months, on october 7th, october 21st and december 2nd, 2013. the aim of the monitoring was to determine the rate of the displacements and their influence on the safe operation. the deformations of the roof construction were monitored using terrestrial laser scanning, and the behaviour of the foundations was measured by precise levelling.

3.1. precise levelling

the monitoring of the foundation strips was performed in the 3 measurement epochs (mentioned above) using precise levelling. the heights of 8 measured points (n1.1-n2.4) were determined relative to the heights of 3 control points (vb1-vb3) in a local height system (fig. 2). the control points are situated near the monitored object, in the footings of the pylons of a pipeline near the warehouse, and are stabilized by ground benchmarks. the stability of the reference net was checked by comparing the height differences between the points in each epoch.

figure 2: position of measured and control points – precise levelling

the measured points are situated at the beginning and end of the 1st and 2nd blocks on both sides of the warehouse. the points are stabilized by wall benchmarks in the bottom part of the parabolic transverse beams. the vertical displacements of these points were determined as the differences between the heights of these points in the individual epochs.

table 1: vertical displacements – precise levelling (displacements of the observed points relative to the initial measurement, october 7th 2013)

point no. | height of point [m] | october 21st 2013: ∆h [mm] | σh [mm] | decision | december 2nd 2013: ∆h [mm] | σh [mm] | decision
n1.1 | 100.2167 | 0.0 | 0.28 | no shift | +0.7 | 0.28 | 5 %
n1.2 | 100.0306 | -0.3 | 0.28 | 5 %–30 % | -0.6 | 0.28 | 5 %–30 %
n1.3 | 99.7724 | +0.1 | 0.28 | no shift | – | – | –
n1.4 | 100.2479 | +0.1 | 0.28 | no shift | -0.4 | 0.28 | 5 %–30 %
n2.1 | 99.9793 | -0.1 | 0.28 | no shift | +0.6 | 0.28 | 5 %–30 %
n2.2 | 100.0296 | -0.6 | 0.28 | 5 %–30 % | -0.2 | 0.28 | no shift
n2.3 | 100.2393 | +0.1 | 0.28 | no shift | -0.3 | 0.28 | 5 %–30 %
n2.4 | 99.9995 | -0.5 | 0.28 | 5 %–30 % | 0.0 | 0.28 | no shift

the statistical significance of the displacements was determined on the basis of statistical analysis using interval estimates. the measurement did not show any displacements at most of the observed points, or the risk of the decision lies between 5 % and 30 %. based on the results of the precise levelling, it can be assumed that the foundation strips of the structure are stable, or that the movements are slow and without influence on safety.

3.2. terrestrial laser scanning

the monitoring of the roof structure was performed using the tls leica scanstation2. the bottom side of the roof was scanned from a single position of the scanner. a 1 m wide strip on both sides of the dilatation was scanned (fig. 3). it was not possible to scan the strip along the whole dilatation, because the fertilizer had not been removed from the left side of the mentioned part of the structure. the scanner was positioned in each epoch in approximately the same position to ensure the same measurement conditions (distance from the scanner, angle of incidence of the measuring signal).
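the displacement decisions in table 1 above (and in table 2 below) are based on interval estimates; the following fragment is only a rough sketch of such a test, with an assumed critical coefficient and assumed sample values, not the authors' exact decision rule:

import math

def displacement_test(h_initial, h_epoch, sigma_initial_mm, sigma_epoch_mm, k=2.0):
    """classify a vertical displacement between two epochs.

    the displacement is the height difference of the point; its standard deviation
    follows from the law of propagation of uncertainty. k is an assumed critical
    coefficient; the intermediate risk band (5-30 %) reported in the paper is
    omitted here for simplicity.
    """
    delta_mm = (h_epoch - h_initial) * 1000.0
    sigma_mm = math.sqrt(sigma_initial_mm**2 + sigma_epoch_mm**2)
    return delta_mm, sigma_mm, abs(delta_mm) > k * sigma_mm

# assumed example: heights [m] of one point in two epochs, per-epoch sigma of 0.20 mm
print(displacement_test(100.2167, 100.2174, 0.20, 0.20))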
figure 3: position of measured and control points – terrestrial laser scanning

the reference network consists of three control points stabilized on the pillars of the warehouse frame by metallic fasteners (this was possible owing to the stability of the foundations). all of the control points were signalized by leica hds targets. to improve the efficiency of the measurements, a simple script was defined before scanning in each epoch. this script defines a separate field of scanning for the different parts of the construction, the scan resolution in each field, and the target acquisition. the minimal point density on the surface of the roof was 10 mm x 10 mm. the data obtained by the tls were transformed to a local coordinate system. the accuracy of the transformation was calculated from the differences between the common identical points after the transformation. the main task of the data processing was modelling the position of the measured points by small planar surfaces. these are positioned on the bottom side of the roof every 2 m on both sides of the dilatation (fig. 3). the vertical displacements of the measured points were determined as the differences between the heights of these points in the individual epochs. the height of each point was calculated using orthogonal regression. the vertical displacements were then recalculated to orthogonal displacements along the normal vector to the surface in each part of the structure. during the data processing of the initial measurement, square fences of 0.1 m x 0.1 m were defined on the bottom side of the roof; these fences define approximately the same set of points in each epoch. the orthogonal regression is calculated from the general equation of a plane by applying singular value decomposition:

a = u · σ · vᵀ

where a is the design matrix, with dimensions n × 3, and n is the number of points used for the calculation. the design matrix contains the coordinates of the point cloud reduced to the centroid. the column vectors of u (n × n) are normalized eigenvectors of the matrix a·aᵀ. the column vectors of v (3 × 3) are normalized eigenvectors of aᵀ·a. the matrix σ (n × 3) contains the singular values on its diagonal. the normal vector of the regression plane is then the column vector of v corresponding to the smallest singular value in σ (lacko, 2008, čepek, 2009). the mean errors of the displacements were obtained using the law of propagation of uncertainty, from the mean error of the transformation and the mean errors of the regression planes, which were calculated from the orthogonal distances of the points from these planes. the statistical significance of the displacements was determined on the basis of statistical analysis using interval estimates (kopáčik et al., 2013). a numerical sketch of the plane fit described above is given below.
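a minimal numpy sketch of the plane fit (the point coordinates are invented for illustration): the points are reduced to their centroid, the reduced coordinates form the design matrix a, and the normal vector of the regression plane is the row of vᵀ (column of v) belonging to the smallest singular value.

import numpy as np

# assumed scanned points [m] inside one 0.1 m x 0.1 m fence on the roof surface
points = np.array([
    [0.00, 0.00, 10.001],
    [0.10, 0.00, 10.003],
    [0.00, 0.10,  9.998],
    [0.10, 0.10, 10.002],
    [0.05, 0.05, 10.000],
])

centroid = points.mean(axis=0)
a = points - centroid                  # design matrix, n x 3, reduced to the centroid
u, s, vt = np.linalg.svd(a)            # a = u * diag(s) * vt
normal = vt[np.argmin(s)]              # unit normal of the regression plane

# orthogonal distances of the points from the fitted plane (basis for the mean error)
distances = a @ normal
print("centroid:", centroid)
print("unit normal:", normal)
print("r.m.s. of orthogonal distances [m]:", float(np.sqrt((distances**2).mean())))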
table 2 shows the displacements (vertical and orthogonal) of the selected observed points.

table 2: displacements – terrestrial laser scanning (displacements of the observed points relative to the initial measurement, october 7th 2013)

point no. | october 21st 2013: ∆z [mm] | σ [mm] | decision | orth. disp. [mm] | decision | december 2nd 2013: ∆z [mm] | σ [mm] | decision | orth. disp. [mm] | decision
p1.1 | 0 | 1.7 | no shift | 0 | no shift | 0 | 1.4 | no shift | 0 | no shift
p1.2 | 0 | 2.2 | no shift | 0 | no shift | -3 | 1.6 | 5 %–30 % | -1 | no shift
p1.3 | -2 | 2.1 | no shift | -1 | no shift | 1 | 1.7 | no shift | 1 | no shift
p1.4 | -2 | 1.8 | no shift | -1 | no shift | 1 | 1.5 | no shift | 1 | no shift
p1.5 | -3 | 1.6 | 5 %–30 % | -2 | 5 %–30 % | -1 | 1.5 | no shift | -1 | no shift
p1.6 | -4 | 1.4 | 5 % | -2 | 5 %–30 % | -2 | 1.3 | 5 %–30 % | -2 | no shift
p1.7 | -4 | 1.4 | 5 % | -3 | 5 %–30 % | -4 | 1.1 | 5 % | -3 | 5 %–30 %
p1.8 | -3 | 1.4 | 5 % | -2 | 5 %–30 % | -5 | 1.3 | 5 % | -4 | 5 %
p1.9 | -3 | 1.7 | 5 %–30 % | -3 | 5 %–30 % | -5 | 1.3 | 5 % | -5 | 5 %
p1.10 | -1 | 2.2 | no shift | -1 | no shift | -3 | 1.9 | 5 %–30 % | -3 | 5 %–30 %
p1.11 | 0 | 2.2 | no shift | 0 | no shift | -4 | 1.6 | 5 % | -4 | 5 %
p1.12 | 0 | 2.1 | no shift | 0 | no shift | -4 | 1.6 | 5 % | -3 | 5 %–30 %
p1.13 | 2 | 2.2 | no shift | 2 | no shift | -1 | 1.7 | no shift | -1 | no shift
p1.14 | 2 | 2.1 | no shift | 2 | no shift | 0 | 1.9 | no shift | 0 | no shift
p1.15 | 3 | 2.1 | 5 %–30 % | 2 | no shift | 2 | 1.9 | no shift | 1 | no shift
p1.16 | 4 | 2.0 | 5 %–30 % | 2 | no shift | 3 | 1.6 | 5 %–30 % | 2 | 5 %–30 %
p1.17 | 2 | 1.8 | no shift | 1 | no shift | 0 | 1.5 | no shift | 0 | no shift
p1.18 | 0 | 1.9 | no shift | 0 | no shift | -1 | 1.8 | no shift | 0 | no shift

the measurement did not show any displacements at most of the observed points, or the movements are slow, with the risk of the decision between 5 % and 30 %. fig. 4 shows the graphical representation of the displacements of the selected points between the initial measurement (october 7th) and the 2nd measurement epoch (october 21st).

figure 4: displacements in the 1st control epoch (right), and the detail of vertical and orthogonal displacements (left)

4. conclusion

the aim of the monitoring described in this paper was to determine the rate of the displacements and their influence on the safe operation of the warehouse. the results did not show significant displacements of the monitored part of the warehouse construction (see table 1 and table 2). because of this, the measurements planned for the future were cancelled. mechanical measuring equipment was mounted on the mentioned part of the structure, and eventual deformation will be monitored visually by the operating staff of the warehouse.

references

[1] kopáčik, a. et al. 2013. deformation monitoring of bridge structures using tls. in 2nd joint international symposium on deformation monitoring [usb]. nottingham: university of nottingham, 2013, 8 p.
[2] lacko, v. 2008. singular value decomposition and difficulties of software implementation of golub algorithm and its determination: student science conference. bratislava: comenius university in bratislava, 2008. 69 p.
[3] vosselman, g. – maas, h. g. 2010. airborne and terrestrial laser scanning. dunbeath: whittles publishing, 2010. 318 p. isbn 978-1904445-87-6.
[4] čepek, a. and pytel, j. 2009. a note on numerical solutions of least squares adjustment in gnu project gama, in pilz j., editor, interfacing geostatistics and gis, springer berlin heidelberg, pp. 173-187. doi:10.1007/978-3-540-33236-7_14

full texts in the czech geographical bibliography database

eva novotná
director of map collection, head of geographical library
charles university in prague, faculty of science, albertov 6, 128 43 praha 2

abstract

open access to documents is one of the basic requirements of database users. czech geographical bibliography on-line provides access to 185,000 bibliographical records of bohemical geographic and cartographic documents and to more than 30,000 full texts and objects.
the access is provided through a connection from the permanent storage, the digital university repository or a url address of the bibliographical record. the works in public domain can directly become accessible or it is necessary to conclude licence agreement with authors, their heirs or with the editors of periodicals. full texts of 14 titles of professional periodicals, university thesis, employees´ monographs or anthologies and on-line publications are available. digitised maps have been connected to the database since 2012. 5,500 of them are accessible from the database since the beginning of 2014. the database is an important source both for professionals and general public interested in geography and cartography. keywords: open access, geobibline, geographical databases, cartographic documents 1. introduction providing access to full texts and objects is nowadays one of the regular requirements of the information service users. the requirement of open access to information, so called open access (hereafter oa) was internationally declared several times in budapest (2002), in bethesda (2003) and finally in berlin (2003) (bartošek, m., 2009). charles university signed on to the berlin declaration in 2013 (uk, 2013). bibliographic database czech geographical bibliography on-line1 (hereafter geobibline) that has been produced since 2008 primarily by the geographical library of the faculty of science of charles university in prague (novotná, 2011) in cooperation with other libraries headed by the national library of the czech republic allows to generate metadata and subsequently to make accessible full texts of geographic and cartographic expert articles, monographs, chapters of books and anthologies, but also maps and graphics (novotná, 2013). the accessibility depends on many factors: copyright laws, financing the production of bibliography of articles, creating metadata, preparation and editing of full texts and pictures, 1 http://www.geobibline.cz geoinformatics fce ctu 13, 2014, doi:10.14311/gi.13.2 19 http://dx.doi.org/10.14311/gi.13.2 novotná, e.: full texts in the czech geographical bibliography database sufficient space for data storage in the repository, provision of lifespan and migration of data, but also on technologies of access. the first concern is the copyright law (novotná, vondráková, 2012). on the one hand the documents of which the propriety copyright laws are already in public domain can be accessible; on the other hand it is possible to conclude exclusive or non-exclusive license contracts with authors, their heirs or with administrators of copyright. the geographical library of the faculty of science of charles university has been using both ways. the bibliography of articles was funded from the project of the czech ministry of culture in 2008-2011. original articles were created in the geographical library. thanks to the czech ministry of culture temap2 project (technologies for making the map collections accessible) there are professionally catalogued and made accessible primarily digitised cartographic documents but also graphics and full texts of the articles. metadata are usually generated from bibliographic records saved in the format marc 213. full texts and pictures are saved in the digital university repository but also in the local storage at the faculty of science of charles university. the access to data is carried out through a database where the full texts and pictures are attached as external links. 2. 
geobibline database the geobibline database describes 185,000 bohemical documents and provides access to a total of over 30,000 full texts and objects with metadata (to 1st april, 2014). it belongs among the world´s largest non-commercial databases in the field of geography. (novotná, 2012) it was primarily formed for the bibliography of a wide scale of documents from the field of geography and cartography of the 20th and 21st century. it was later extended chronologically to 1450. in content it was extended by full texts and objects. to ensure oa the database is using both green and gold way as well as other options. the green way means free access allowed by the author, the gold way is opened by the publisher and usually it is paid by the author. after the five-year experience from the czech environment it can be observed that making something accessible in oa is not always so easily definable. there is a whole range of differences both in the editorial policy and in the approach of the authors and their heirs. in principle, a very individual approach is always required. the connection is carried out through the permanent storage of the faculty of science of charles university, if the license for the work was obtained or if it is a work in public domain. the second, but unreliable option is to link url addresses into the bibliographical record. both types of connected full texts will display to the user as a result of the search in the left part of the screen as external links. after clicking the link it is necessary to agree the information on copyright law, where the researcher agrees to use the work only for his own need. after that the full text or the object opens. it can again have several forms depending on the way of connecting to the system. 2.1. full texts in the permanent storage the database contains full texts of articles, monographs, chapters of books and anthologies, as well as statistics. license contracts are gradually concluded with the editors of magazines, 2 http://www.temap.cz 3 http://www.loc.gov/marc/bibliographic/ecbdhome.html geoinformatics fce ctu 13, 2014 20 novotná, e.: full texts in the czech geographical bibliography database figure 1: example of the search interface where the external links to the full texts will display each of which has its own publication policy. sometimes the articles are published electronically before the printed release. it is led by an effort to speed up the process of transferring data to a potential interested person who may use and quote the article in the scientific research. full texts of following titles are accessible: auc geographica, acta onomastica, demografie, folia facultatis scientiarum naturalium universitatis purkynianae brunensis. geographia, geodetický a kartografický obzor, geografické rozhledy, geografie, kartografický přehled, moravian geographical reports, opera corcontica, scripta facultatis scientiarum naturalium universitatis purkynianae brunensis. geographia, sociologický časopis, urbanismus a územní rozvoj, vodní hospodářství, vodohospodářské technicko-ekonomické informace and vojenský geografický obzor. detailed descriptions of the titles that are excerpted and supplemented by full texts can be found on the database website in the section excerpted periodicals4. the amount of accessible volumes depends on the possibilities of the editors and of the geographical library. in the last ten years the magazines have been editorially adapted from electronic data and the editors have usually archives available. 
the older articles are necessary to be scanned and read up by ocr software. the database excerpts 45 profile periodicals from the first volume. it then provides an access to 14 titles with full texts. the accesses to monographs, chapters, anthologies are provided either directly by regular authors, i.e. employees of the charles university or they are documents published on the internet. the czech statistical office data sets, that are commonly available electronically, are connected as well. university theses are also a part of the database. full texts have been accessible since 2010. license contracts on school works are concluded with students. geobibline provides access to works from masaryk university and charles university. 4 http://www.geobibline.cz/cs/node/29 geoinformatics fce ctu 13, 2014 21 novotná, e.: full texts in the czech geographical bibliography database license contracts are concluded with important authors or their heirs. the first contract was concluded with prof. k. kuchař, who provided license rights to the work of his father k. kuchař. professor karel kuchař (1906-1975) was an important representative of geography and cartography. articles, monographs, reviews, new year cards and reports have been digitized, described and added to the database. some of the digitized documents were obtained through the contract from the moravian library5 project national digital library. a web portal devoted to prof. kuchař´s6 work was created. 276 documents have been made accessible in the pdf format (chrást, 2014). a license contract with eng. dvořáček, the heir of anna dvořáčková who is a co-author of kuchař´s book on the moll map collection7 in brno was concluded in 2013. 2.2. full texts from the linked url addresses linking full texts published on the internet represents the second way of access. this method is not ideal, particularly because of the lack of stability of addresses and different forms of publication that are problematic. the point is that the editors often publish whole volumes or issues, which slows down the search and it is not possible to connect them directly to a specific bibliographic record of the article. in spite of a regular control of internet addresses functionality the method is not reliable. however, there is no other option until the license contract is concluded. 2.3. cartographic documents in the database the database czech geographical bibliography on-line makes also accessible cartographic documents, old maps, atlases and globes (until 1850). altogether it contains 31,000 bibliographic records of such special documents. 23 of them originated in the 16th century, 123 in the 17th century, 776 in the 18th century, from the 19th century then entire 4546 records, from the 20th century 18432 and finally from the 21st century 7874 records. the charles university computer centre has prepared a script that lists newly imported objects to a file. the script lists pid, scans the barcode field identifier in the technical metadata. afterwards it sends a query to aleph and adds to the pid record system a number and a title of the map sheet (according to marc 21 field) for better traceability. thus a generated list is sent to the map collection of the faculty of science. bibliographic records are then connected to the objects of maps and atlases through z39.50 protocol based on the title from the generated list. (novotná, 2013). searches for bohemical documents are made for the geobibline database and then they are connected to the database. 
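the record-linking workflow described above can be roughly sketched as follows; every file name, field and helper here is hypothetical (the actual script of the charles university computer centre is not published), so this only illustrates the idea of pairing newly imported repository objects with catalogue records by barcode and map-sheet title:

import csv

# hypothetical export from the repository: one line per newly imported object, "pid;barcode"
repository_export = "new_objects.csv"
# hypothetical export from the catalogue: "barcode;system_number;map_sheet_title"
catalogue_export = "catalogue_records.csv"

with open(catalogue_export, newline="", encoding="utf-8") as f:
    catalogue = {row["barcode"]: row for row in csv.DictReader(f, delimiter=";")}

linked = []
with open(repository_export, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter=";"):
        record = catalogue.get(row["barcode"])
        if record:
            linked.append((row["pid"], record["system_number"], record["map_sheet_title"]))

# the resulting list would be sent to the map collection so that the bibliographic
# records can be connected to the digitised objects
for pid, sysno, title in linked:
    print(pid, sysno, title)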
it is possible to view maps, zoom in and out by clicking on an external link from the database catalogue. the maps are available in the university repository8 or directly from the geobibline database in the jpeg2000 format at a resolution of 300 dpi under the icon fulltext. the descriptive metadata can be searched in two interfaces (simple and advanced), according to fields: title, author (inverted), secondary authors, subject heading, publisher, year of publication, genre/form and language. the records may be also only browsed through and 5 https://www.mzk.cz/o-knihovne/odborne-cinnosti/ndk 6 http://web.natur.cuni.cz/gis/kuchar/ 7 http://mapy.mzk.cz/en/ 8 http://repositar.cuni.cz geoinformatics fce ctu 13, 2014 22 novotná, e.: full texts in the czech geographical bibliography database figure 2: sample of a map display from the geobibline database metadata can be viewed in three formats. the help icon (a question mark) is up on the right. the access is possible both in czech and english (globe icon). metadata are in mix format. the geobibline database provides an on-line access to 20 maps from the 16th century, 96 maps from the 17th century, 689 maps from the 18th century and 4695 from the 19th and 20th centuries. 3. conclusion the database development continues thanks to the temap project. a shared collection of bibliographic records from participating libraries and original cataloguing of the works in the geographical library and in the map collection of the faculty of science are going on. digitized maps are also gradually being uploaded to the database and connected to the records. the digital repository contains 32,000 digitized cartographic documents. by the end of the project in 2015 their number should increase of 20,000. not all of them are, however, cartographic bohemics. even so, their share will be fairly high. photographs of globes and telluriums are beginning to be connected as well. the database websites collect also statistical information on accesses from abroad. apart from the czech users the websites are frequently visited from the united states of america, germany, the netherlands, russia, ukraine and spain. on the contrary, the number of visitors from slovakia is surprisingly low. acknowledgement this article was supported through the project of the czech ministry of culture, df11p01ovv003, temap – technology for discovering of map collections of the czech republic: methodology and software for protection and use of cartographic works of the national cartographic heritage. geoinformatics fce ctu 13, 2014 23 novotná, e.: full texts in the czech geographical bibliography database references [1] bartošek, m. 2009. open access otevřený přístup k vědeckým informacím. úvod do problematiky. zpravodaj úvt mu. issn 1212-0901, 2009, roč. xx, č. 2, s. 1-7. [2] chrást, j. 2014. nový web věnovaný prof. karlu kuchařovi. ikaros [online]. 2014, roč. 18, č. 2 [cit. 04.04.2014]. available: http://www.ikaros.cz/node/8155 [3] novotná, eva a kolektiv. 2011. geografická bibliografie čr online: geobibline. praha : všcht. 152 s. : il. isbn 978-80-7080-773-6. available: http://vydavatelstvi. vscht.cz/katalog/uid_isbn-978-80-7080-773-6/anotace/. [4] novotná, eva a alena vondráková. 2012. zpřístupnění a užití digitalizovaných kartografických děl. geografické rozhledy. 2012, roč. 22, č. 3, příloha, s. 1-4. issn 12103004. [5] novotná, eva. 2012. base de datos geobibline y su comparación con las bases geográficas existentes en el mundo. el profesional de la información. roč. 21, č. 3, s. 304-311. 
issn 1386-6710.
[6] novotná, eva. 2013. staré mapy a grafiky v geografické bibliografii čr on-line. knihovna – knihovnická revue [online]. roč. 24, č. 1, s. 5-27. issn 1801-3252. available: http://knihovna.nkp.cz/knihovna131/13105.htm
[7] univerzita karlova. 2013. uk se přihlásila k berlínské deklaraci open access. iforum [online]. 2013, [cit. 04.04.2014]. available: http://iforum.cuni.cz/iforum-14799.html

accuracy evaluation of pendulum gravity measurements of robert daublebsky von sterneck

alena pešková, jan holešovský
department of geomatics, faculty of civil engineering, czech technical university in prague, thákurova 7, 166 29 prague 6, czech republic
alena.peskova@fsv.cvut.cz, jan.holesovsky@fsv.cvut.cz

abstract

the accuracy of the first pendulum gravity measurements in the czech territory was determined using both the original surveying notebooks of robert daublebsky von sterneck and modern technologies. since more accurate methods are used for gravity measurements nowadays, the work [3] is important mostly from the historical point of view. in previous works [5], the accuracy of sterneck's gravity measurements was determined using only a small dataset. here we process all of sterneck's measurements from the czech territory (a dataset ten times larger than in the previous works [5]), and we comprehensively assess the accuracy of these measurements. the locations of the measurements were found with the help of the original notebooks. gravity at each site was interpolated using the gravity model egm08; the resultant gravity is in the current system s–gr10. finally, the accuracy of sterneck's measurements was evaluated on the basis of the differences between the measured and interpolated gravity.

keywords: robert daublebsky von sterneck, relative pendulum measurements, gravity.

1. introduction

robert daublebsky von sterneck (* 7.2.1839, † 2.11.1910) was born in prague and worked as a geodesist, astronomer and geophysicist. he was the head of the astronomical observatory institute in vienna in 1880-1884 and he was the first to make gravimetric measurements in austria-hungary. although he served in the army all his life, he also carried out various surveying and astronomical measurements. his work was recognized and his name is still well known today. a pendulum instrument built by sterneck himself was used for the gravity measurements, and its improved version was also used in other countries in europe. daublebsky used a relative method to measure gravity: only the swing period of the pendulum was measured, with four corrections applied. the initial gravity point was located in the cellar of the military geographical institute in vienna, with the value g = 980 876 mgal [2]. we divided sterneck's measurements into two datasets. the first rule for the division was the different localities of the measurements (measurements on hilltops near trigonometric points, and measurements in buildings in towns). the second rule was the time of the measurements (there is a 3-year gap between the two datasets).
2. localization of sterneck's gravity measurements

the original surveying notebooks of daublebsky [6] and the summary of results in the technical report [4] were used for the localization of the gravity measurements. the technical report contains approximate astronomical coordinates of the measurements, whereas detailed information about the measurement process and the locations is given in the notebooks. the following information was taken from the technical report: the year of measurement, the number and title of the point (czech and german), latitude and longitude, elevation and the measured value of gravity. only the details about the locations were taken from the notebooks. these details were not recorded for all measured points; 15 points measured in towns had no information about their location (these points were localized only by approximate coordinates and heights). the measurements were divided into two groups, both by measurement location and by the time of measurement. in 1889 – 1895, 106 points were determined in the czech territory, as is shown in figure 1. the first group of points is located on hilltops close to known trigonometric points – the hilltop dataset (blue circles in figure 1). in 1889 – 1891, 35 points close to trigonometric points and 6 points at other locations were determined in the czech territory. in 1894 – 1895 (after a 3-year gap), the second group of 65 points was measured in buildings inside towns in the moravian territory – the building dataset (green squares in figure 1).

figure 1: locations of sterneck's gravity measurements.

3. determination of the gravity differences

we used the arcmap program, with a connected wms provided by the czech office for surveying, mapping and cadastre (čúzk), to determine the coordinates of the measurements. the coordinates of the locations, with error estimates and corrections for heights (e.g. measurement in a building or on top of a lookout tower), were provided by the department of gravimetry, land survey office (zú). they interpolated the complete bouguer anomaly using the method of ordinary kriging. the results of the interpolation are the most probable values of gravity for the referenced locations, given with their upper and lower estimate limits. the gravity value lies in this interval with 95% probability. the limits are affected by the uncertainty in elevation and position. the estimated interval is not symmetrical and is different for each of the measured points. throughout this work, only the most probable gravity values were used. the gravity differences are calculated as the difference between sterneck's measured gravity and the interpolated gravity. these differences were used to evaluate the accuracy of daublebsky's pendulum gravity measurements.

4. data analysis

the differences between the measured and interpolated gravity values are distinctly different for the hilltop and the building dataset. the gravity differences in the building dataset show a systematic offset of +21.7 mgal, shown in figure 2. this displacement corresponds to an error of 72 meters in elevation. the cause of this displacement is not known, therefore both datasets were processed separately. a surprising fact about the building dataset is that the gravity differences for points without precise location information (only approximate coordinates and heights) and for points with this information were not significantly different, as illustrated in figure 3.
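a minimal sketch of the evaluation described in sections 3 and 4 (the gravity values below are invented placeholders, not sterneck's data): differences between measured and interpolated gravity are formed for a dataset, and the mean difference, the mean absolute difference and the standard deviation of the differences are computed.

import statistics

# assumed pairs of (measured gravity, interpolated gravity) in mgal for one dataset
pairs = [(980850.0, 980838.5), (980890.0, 980881.0), (980940.0, 980926.5), (981010.0, 981001.0)]

differences = [measured - interpolated for measured, interpolated in pairs]

mean_diff = statistics.mean(differences)                        # mean gravity difference
mean_abs_diff = statistics.mean([abs(d) for d in differences])  # mean absolute difference
std_dev = statistics.stdev(differences)                         # standard deviation of the differences

print(f"mean difference: {mean_diff:.1f} mgal")
print(f"mean absolute difference: {mean_abs_diff:.1f} mgal")
print(f"standard deviation: {std_dev:.1f} mgal")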
figure 2: differences between measured and interpolated gravity for both datasets.

the datasets were tested for data quality. for this purpose, dependencies between various quantities were tested using hypothesis verification; the computed correlation coefficient was compared with its critical value. the tested hypotheses are: gravity falls with growing elevation (h1), gravity grows with growing latitude (h2), and gravity and longitude are independent (h3). all three hypotheses were verified for the hilltop dataset. in the building dataset, h2 and h3 were also verified, but h1 was not. because the other tested quantities in the building dataset behave as expected, we think that the elevation values are affected by an error different from gaussian noise. still, the building dataset was used in the further processing.

figure 3: differences between measured and interpolated gravity for the building dataset.

the accuracy of sterneck's measurements was evaluated by several methods. first, we determined the mean gravity difference. this value shows the magnitude of the difference between sterneck's measured gravity and the interpolated gravity. the hilltop dataset has a mean gravity difference of +11.6 mgal, the building dataset +33.3 mgal. these means are apparently affected by an unknown displacement between the gravity systems used; the computed differences are only valid under the assumption of a zero displacement between the gravity systems. if we want to compare the accuracy of the past and recent measurements, we must calculate the mean difference from the absolute values of the gravity differences. the datasets are characterized by a mean absolute gravity difference of 12.9 mgal for the hilltop dataset and 33.3 mgal for the building dataset. the second method is to evaluate the precision of the measurements using the standard deviation of the mean gravity difference. this value shows the precision of the measurement method and removes the systematic displacement between the two datasets. both datasets have an identical standard deviation of 10.3 mgal. the conclusion is that both datasets have identical measurement accuracy, although they were determined under different conditions and in a different environment.

5. discussion and conclusion

sterneck's measurements were divided into two datasets differing both in the type of the measurement locations and in the time of their acquisition. the statistical processing and evaluation were done separately because of these differences. the building dataset is displaced systematically by about +21.7 mgal from the hilltop dataset (mean gravity difference 11.6 mgal for the hilltop dataset and 33.3 mgal for the building dataset). the cause of this systematic displacement is unknown. the building dataset was determined after a 3-year gap; during this time some parameters of the pendulum instrument or the way of calculating the corrections could have changed. these changes could be the cause of the systematic displacement between the two datasets. therefore the accuracy of sterneck's measurements is better assessed by the standard deviation of the mean difference, which is 10.3 mgal and is identical for both datasets. this value can be compared with sterneck's precision estimate of
these changes probably can cause the systematic displacement between both of datasets. therefore the accuracy of sterneck’s measurements is better assessed by standard deviation of the mean difference. that is 10.3 mgal and is identical for both datasets. this value can be compared with sterneck’s precision estimate of geoinformatics fce ctu 14(1), 2015 42 a. pešková and j. holešovský: accuracy of pendulum gravity measurements 10 mgal [4]. the mean of gravity difference 11.6 mgal for the hilltop dataset and 33.3 mgal for the building dataset can be compared to measurements in hungary where the errors of sterneck’s measurements are up to ±20 mgal [5], (but the difference for some points is up to 25 mgal [1]). acknowledgement the authors thank the employees of the department of gravimetry, land survey office (zú) in prague; martin lederer, who borrowed the original surveying notebooks of robert daublebsky von sterneck and otakar nesvadba, who interpolated the gravity values. references [1] alexandr drbal and milan kocáb. “významný rakouský generálmajor dr.h.c. robert daublebsky von sterneck”. in: geodetický a kartografický obzor 56(98).2 (2010), pp. 40– 46. url: http://archivnimapy.cuzk.cz/zemvest/cisla/rok201002.pdf. [2] martin lederer. “historie kyvadlových měření na území české republiky”. in: geodetický a kartografický obzor 58(100).6 (2012), pp. 129–133. url: http://archivnimapy.cuzk. cz/zemvest/cisla/rok201206.pdf. [3] alena pešková. “hodnocení přesnosti kyvadlových tíhových měření r. sternecka”. master thesis. czech technival university in prague, 2015. url: http://geo.fsv.cvut. cz/proj/dp/2015/alena-peskova-dp-2015.pdf. [4] zdeněk šimon. kyvadlová měření v letech 1956 1962. tech. rep. geodetický a topografický ústav v praze, 1962. [5] v. b. staněk and j. potoček. “vývoj a způsob měření intensity tíže v čechách a na moravě”. in: zeměměřičský obzor 1(28).6 (1940), pp. 81–87. [6] robert sterneck. “měřické sešity 1889 1895”. vojenský zeměpisný ústav ve vídni. unpublished. geoinformatics fce ctu 14(1), 2015 43 http://archivnimapy.cuzk.cz/zemvest/cisla/rok201002.pdf http://archivnimapy.cuzk.cz/zemvest/cisla/rok201206.pdf http://archivnimapy.cuzk.cz/zemvest/cisla/rok201206.pdf http://geo.fsv.cvut.cz/proj/dp/2015/alena-peskova-dp-2015.pdf http://geo.fsv.cvut.cz/proj/dp/2015/alena-peskova-dp-2015.pdf a. pešková and j. 
holešovský: accuracy of pendulum gravity measurements table 1: input – part 1 year of number of latitude longitude altitude measured gravity measurement point [° ’] from ferro [° ’] [m] [mgal] 1889 49 49 24 32 38 738 980 856 50 49 36 32 20 712 980 887 51 49 55 32 27 545 980 938 52 50 44 33 24 1602 980 762 53 50 08 32 08 356 981 016 54 50 33 31 36 835 980 924 55 50 08 32 39 213 981 070 56 49 57 32 51 470 980 952 57 50 22 31 57 205 981 076 58 50 23 31 57 459 981 019 59 50 25 31 40 202 981 060 60 50 26 31 41 417 980 998 61 50 25 31 41 250 981 055 1890 62 49 14 31 58 624 980 846 63 49 22 31 29 585 980 851 64 49 39 31 31 842 980 855 65 49 48 31 45 659 980 911 66 49 49 31 20 716 980 893 67 50 01 30 40 822 980 922 68 50 12 31 25 534 980 983 69 50 34 31 08 921 980 920 70 50 48 31 47 748 980 963 71 50 44 32 39 1010 980 915 72 50 25 32 59 430 981 016 73 50 32 32 23 565 980 989 74 49 58 30 10 939 980 862 75 49 40 30 39 537 980 937 76 49 26 30 52 724 980 877 78 49 00 31 29 1362 980 663 79 48 52 31 57 1084 980 716 80 48 46 32 15 869 980 760 1890 81 49 39 32 59 709 980 849 82 49 47 33 24 662 980 895 1891 85 49 30 33 30 693 980 881 86 49 19 33 11 732 980 873 87 49 05 32 51 731 980 819 88 49 10 33 22 710 980 861 89 49 22 33 45 639 980 841 90 49 11 33 56 513 980 846 91 49 05 34 16 201 981 004 92 48 52.0 34 19.0 550 980 853 geoinformatics fce ctu 14(1), 2015 44 a. pešková and j. holešovský: accuracy of pendulum gravity measurements table 2: input – part 2 year of number of latitude longitude altitude measured gravity measurement point [° ’] from ferro [° ’] [m] [mgal] 1894 371 48 51.3 34 47.7 160 980 943 372 49 00.6 34 47.8 193 980 917 373 48 59.7 34 31.5 226 980 943 374 48 58.9 34 11.3 181 980 957 375 49 03.0 33 58.8 246 980 961 376 48 59.1 33 44.5 355 980 937 377 49 03.3 33 28.5 465 980 925 1895 378 50 26.3 33 01.3 273 981 057 379 50 14.5 33 09.5 228 981 068 380 50 02.3 33 26.8 214 981 076 381 49 54.6 33 03.5 263 981 054 382 49 36.5 33 14.7 428 980 946 383 49 45.7 33 34.3 569 980 935 378 50 26.3 33 01.3 273 981 057 379 50 14.5 33 09.5 228 981 068 380 50 02.3 33 26.8 214 981 076 381 49 54.6 33 03.5 263 981 054 382 49 36.5 33 14.7 428 980 946 383 49 45.7 33 34.3 569 980 935 384 49 42.9 33 55.9 555 980 955 385 49 57.3 33 49.7 287 981 030 386 49 11.7 34 16.5 235 980 962 387 49 02.3 34 17.1 191 980 979 388 48 59.9 33 01.0 506 980 911 1895 389 49 23.7 33 15.5 514 980 940 390 49 21.3 33 40.7 425 980 955 391 49 33.7 33 36.6 574 980 922 392 49 31.4 33 55.5 554 980 942 393 49 21.0 34 05.3 270 980 999 394 49 29.3 34 19.7 396 980 969 395 49 35.4 34 33.3 410 980 953 396 49 35.4 34 55.3 225 981 026 397 49 16.7 34 40.0 254 981 001 398 49 21.5 35 02.3 200 980 983 399 49 06.3 35 03.7 209 980 958 400 49 01.4 35 18.8 248 980 932 401 49 08.4 35 40.7 390 980 892 402 49 13.7 35 20.2 231 980 959 403 49 24.0 35 20.5 316 980 972 404 49 32.9 35 24.2 256 981 010 405 49 20.4 35 39.7 340 980 954 geoinformatics fce ctu 14(1), 2015 45 a. pešková and j. 
holešovský: accuracy of pendulum gravity measurements table 3: input – part 3 year of number of latitude longitude altitude measured gravity measurement point [° ’] from ferro [° ’] [m] [mgal] 1895 406 49 21.9 35 58.5 510 980 906 407 50 33.8 33 34.9 415 981 052 408 50 39.8 33 29.1 610 981 045 409 50 36.7 33 10.4 462 981 052 410 50 24.3 33 21.0 335 981 039 411 50 30.8 33 41.0 359 981 097 412 50 35.2 33 59.9 405 981 085 413 50 25.1 33 49.8 337 981 069 414 50 09.9 33 56.6 321 981 014 415 50 02.2 34 10.0 368 981 007 416 49 54.8 34 16.8 387 981 002 417 50 05.1 34 25.6 567 980 972 418 50 09.8 34 36.8 536 980 969 419 49 53.0 34 32.3 301 981 000 420 49 45.5 34 19.9 350 981 002 421 49 46.3 34 47.3 235 981 025 422 50 04.2 34 45.6 489 981 005 423 50 13.9 34 52.5 441 981 023 424 50 23.5 34 40.4 339 981 043 425 50 16.5 35 22.9 238 981 081 426 50 07.4 35 03.1 519 981 003 427 49 47.7 35 06.6 550 980 944 428 49 58.0 35 16.2 550 980 999 429 50 05.4 35 22.7 313 981 041 433 49 32.9 35 52.9 406 980 973 435 49 45.1 36 18.3 308 980 972 436 49 34.7 36 26.0 386 980 973 geoinformatics fce ctu 14(1), 2015 46 ________________________________________________________________________________ geoinformatics ctu fce 2011 5 multiple visualization web approach for cultural heritage objects dante abate1, graziano furini1, silvio migliori2, samuele pierattini1 1enea research centre, utict via martiri di montesole 4, bologna 40129, italy {dante.abate, graziano.furini, samuele.pierattini}@enea.it 2enea headquarters, utict lungotevere thaon de revel 76, rome 00196, italy silvio.migliori@enea.it keywords: laser scanning, 3d modeling, web visualization, hpc, quicktime vr object (qtvr) abstract: usually the diffusion and sharing of cultural heritage documented 3d models on the web are not first of concern for scholars due to the fear of losing the intellectual property related to them. sometimes the interaction and navigation of virtual objects via the world wide web is also problematic due to their dimension (number of triangles), when high-definition has to be preserved. in this paper we propose a mash up methodology, for a multiple approach to visualize 3d models over the internet. after the digitization of a marble statue placed in the medieval museum of the city of bologna, according to the well known 3d pipeline (from the laser scan survey to the texturing process), we assembled together different solutions for sharing the model on the web. 1. introduction 1.1 content today it should be possible for an end-user to take advantage of web resources to view high-definition 3d models of cultural heritage artifacts and monuments. ideally, this should be possible online and in real time without a browser plug-in. but, to date, this goal has not been reached. nowadays, to distribute a high-definition 3d model on the network means giving the model itself to the user who needs to download the whole file, has sufficiently powerful hardware, a high speed connection, and installs, if available, the application for display, or reaches a compromise solution where the resolution during navigation and interaction with the virtual object is low and the rendering of the image is blurred and fuzzy [1,2]. however, different resources and techniques allow visualizing 3d models on the web. the solutions proposed here are suitable for different users and different goals. 
1.2 3d modeling

the first step of the process was the digitization of a statue, probably the base of an "acquasantiera" (the basin where holy water is usually stored), today placed in the medieval museum of the city of bologna [3]. the equipment used was a nextengine desktop 3d scanner mounted on a tripod (figure 1). the scan data were processed in meshlab v.1.3.0, open source software developed by the visual computing lab, isti, italian national research council [4]. a wide range of filters was applied to clean and repair each range map. afterwards, a semiautomatic approach was used to align all sixty range maps into the same coordinate system using an icp (iterative closest point) algorithm. finally, poisson surface reconstruction was applied; it can reconstruct high quality surfaces through a global optimization. the model was textured using the uv modifier of 3d studio max 9, a method that uses uv coordinates to attach a texture to a model. after all the post-processing steps, the high-definition 3d model of the "acquasantiera" consists of 6 million triangles plus the texture information (figure 2). although it would have been possible to decimate the model significantly, the largest number of triangles was preserved in order to keep the high quality and details of the object.

figure 1: laser scanner survey
figure 2: virtual high-definition 3d model

2. output

the solutions made available to the final user to visualize the model on the web are:

- high-definition remote rendering on a hpc infrastructure. the technology developed for the remote visualization (ark3d, the enea-grid infrastructure for remote 3d) frees the user from the need for specific hardware and software resources, and protects the intellectual property related to the 3d model, since it is not downloaded locally. the user interacts with the virtual environment using the remote hardware and software resources [5].
- 3d no plugins (3dnp), a 3d viewer that does not need a browser plug-in, only javascript [6]. it requires a series of images which can be acquired either with a digital camera or by software when the object has already been digitized.
- quicktime vr object (qtvr), which allows the creation and viewing of photographically captured objects through images taken at multiple viewing angles. it works in the standalone quicktime player as well as in the quicktime web browser plug-in [7].
- virtual reality modeling language (vrml) file format for model download.

2.1 high definition remote rendering on a hpc infrastructure

the first possibility made available to visualize the "acquasantiera" on the world wide web was a remote rendering approach exploiting the ark3d architecture. the study performed by enea within the cresco project concerns the implementation of a hardware and software architecture (ark3d) that allows remote access to a repository of three-dimensional models (high resolution, multi-disciplinary, available via the internet, and provided and uploaded by registered users). the technology developed for the remote visualization frees the user from the need for specific hardware and software resources, and protects the intellectual property related to the 3d model, since it is not downloaded locally.
the user interacts with the high-definition virtual environment using the remote hardware and software resources (remote 3d) (figure 3). this project uses the ict infrastructure of enea-grid and in particular the graphic cluster built for the cresco project and available at the enea centre of portici (napoli, italy). the 3d models can be uploaded by users, subject to registration and verification of the content. the database is queried via the web by free search keywords. the result contains the available textual data and a link to display a high-resolution three-dimensional version of the model (remote 3d). the remote 3d is done using a dedicated graphic cluster, which guarantees the protection of the data. the whole architecture is scalable, both in the number of models uploaded and in the number of simultaneous users logging in. the database is queried via a web page. the result contains all the available documentation on the selected models. for non-registered users, together with the metadata, there is a link to images (screenshots) of the model. for registered users, in addition to the documentation, there is a link to run the remote applications which allow the visualization of the three-dimensional models. the display can be made through any graphics application, both proprietary and open source, that uses opengl on linux operating systems. within this project the standard viewer was implemented using open scene graph libraries [8]. in essence, the application runs on the remote machine (server), using its hardware resources, including the graphics card, for rendering (remote rendering). at present, the graphic cluster dedicated to the project consists of 4 workstations with amd dual-core processors, 16 gb of ram and an nvidia quadro fx. the user, through the dedicated web page, has to run a java applet which installs and automatically configures the client to access the graphics application. the user is not required to know any configuration features or install other kinds of applications. between the client and the server, the data transfer consists exclusively of a stream of compressed images generated remotely by the application, together with the keyboard and mouse interaction functions. operations delegated to the client are limited to decompression of the images and keyboard and mouse input management. with this kind of technology the user can use hardware such as a netbook or pda, even with a limited bandwidth connection, to access the platform.

figure 3: remote 3d rendering based on the cresco graphic section

2.2 3d no plugins (3dnp)

3d no plugins is a javascript viewer that loads an image series on a web page and simulates a 3d view, showing the appropriate image according to the user's mouse movement. 3dnp is licensed under the gnu general public license (gpl) and is therefore completely free. in contrast with panorama pictures, which are captured from one location looking out at various angles, objects are captured from many locations pointing in toward the same central object. the simplest type of object vr to capture is a single-row one, typically captured around the equator of an object. capturing a multi-row object movie requires a more elaborate setup, because the camera must be tilted above and below the equator of the object at several tilt angles.
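the geometry of such a multi-row capture, with virtual cameras distributed over a sphere around the object and all pointing at its centre, can be sketched in a few lines. the example below is only an illustration of the idea; the function and parameter names are invented here and are not those of 3dnp or of the blender script described next.

```python
# illustrative sketch: camera positions on a sphere around an object,
# one position per (tilt, pan) combination, all looking at the centre
import math

def camera_positions(radius, tilt_steps, pan_steps, max_tilt_deg=60.0):
    """return a list of (x, y, z) camera positions around the origin."""
    positions = []
    for i in range(tilt_steps):
        # tilt angles spread symmetrically above and below the equator
        tilt = math.radians(-max_tilt_deg + 2 * max_tilt_deg * i / max(tilt_steps - 1, 1))
        for j in range(pan_steps):
            pan = 2 * math.pi * j / pan_steps
            x = radius * math.cos(tilt) * math.cos(pan)
            y = radius * math.cos(tilt) * math.sin(pan)
            z = radius * math.sin(tilt)
            positions.append((x, y, z))
    return positions

# e.g. 9 tilt rows x 36 pan positions = 324 views, the number of images rendered in
# this study (the actual level/degrees settings used are not reported in the paper)
views = camera_positions(radius=2.0, tilt_steps=9, pan_steps=36)
print(len(views))  # 324
```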
for this study the image series was obtained using a python script for blender, the open source 3d modeling software. this script is useful for quickly creating the images for the javascript viewer; 324 images were rendered. once the script is running, the previously created 3d model appears inside a sphere formed by vertices (figure 4). these vertices represent the points where the blender camera will be virtually placed so that it can render the model from different points of view. four different parameters can be modified in order to set the virtual sphere around the 3d model:

- level: sets the horizontal subdivision
- degrees: sets the vertical subdivision
- cameraboss: defines the size
- capbuffer: defines how flat

by increasing the "level" and "degrees" values the script produces more rendered images. the result is a more realistic and smooth interaction with the 3d model on the web site where the object is embedded (figure 5). naturally, more images result in a longer computing time.

figure 4: python script for blender interface
figure 5: 3d no plugins embedded in a web page

2.3 quicktime vr object (qtvr)

the 324 images produced with the python script for blender were used to create interactive quicktime vr objects (qtvr objects). quicktime virtual reality is an extension of the quicktime technology developed by apple computer, inc. that allows viewers to interactively explore and examine photorealistic, three-dimensional objects and virtual worlds. while 3d no plugins is embedded into the web page, the quicktime object movie file (extension .mov) is opened by apple's quicktime player. one of the images is displayed on the screen according to the position of the mouse cursor; when the mouse is moved, the images change one after another continuously, giving a movement like an animation (figure 6).

figure 6: quicktime movie player

the viewer interacts with and navigates the vr object using conventional computer input devices (such as the mouse, trackball, track pad or keyboard) to change the displayed image via the quicktime vr movie controller. together with the traditional orbit tool, the user can also use the magnifier tool; however, an image magnified too much can result in a loss of definition and quality. to create the qtvr object, the vr worx suite was used. vr worx gives the ability to generate qtvr panoramic movies, object movies and multi-node scenes.

2.4 downloadable file formats

eventually the original 3d model was also converted to a vrml file to allow the final user to visualize it locally, with full interaction, through a standard vrml player. two vrml files, both with texture information attached, were created: a high-definition model of more than 6 million polygons (490 mb) and a low-definition model of 250 thousand polygons (32 mb). however, this approach implies: the need to download the 3d model itself from the web and consequently a high bandwidth connection; the installation of a vrml viewer; and the loss of control over the 3d model and its intellectual rights by the former owner once external users take possession of it.
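producing the low-definition download version is essentially a mesh decimation step. the authors did this with meshlab and 3d studio max; purely as an illustration, a comparable reduction to roughly 250 thousand triangles could be sketched with the open3d library (an assumption of this example, not software used in the paper; the file names are hypothetical).

```python
# illustrative only: reduce a high-definition mesh to ~250k triangles
# (the paper's workflow used meshlab / 3d studio max, not this library)
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("acquasantiera_highdef.ply")  # hypothetical file name
print(len(mesh.triangles))  # ~6 million triangles in the high-definition model

low = mesh.simplify_quadric_decimation(target_number_of_triangles=250_000)
low.compute_vertex_normals()
o3d.io.write_triangle_mesh("acquasantiera_lowdef.ply", low)
```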
3. conclusions

the parameters intended to be preserved in this study were: real time interaction; definition/quality; and the intellectual rights attached to the model. aside from the remote rendering technique, all other approaches showed a lack of definition, interaction and/or intellectual rights preservation, separately or together. indeed, both 3d no plugins and the qtvr object show a loss in definition/quality, interaction and control over the model's intellectual rights (the series of images is downloaded to the user's pc cache). on the other hand, locally downloading the complete 3d model (vrml) guarantees full interaction and high definition, even if the final user has to install the proper visualization software and own the needed hardware resources. only the remote rendering approach on the hpc infrastructure (ark3d) is able to preserve all these values at once. in conclusion, the choice of the correct visualization approach ultimately depends on the final goal of the user's project.

4. references

[1] koller d., turitzin m., levoy m., tarini m., croccia g., cignoni p., scopigno r., protected interactive 3d graphics via remote rendering, proceedings of the 31st international conference on computer graphics and interactive techniques, siggraph 2004, los angeles, august 8-12.
[2] koller d., levoy m., protecting 3d graphics content, communications of the acm, 48(6):74-80, june 2005.
[3] abate d., ciavarella r., furini g., guarnieri g., migliori s., pierattini s., 3d modeling and remote rendering technique of a high definition cultural heritage artefact, procedia computer science, volume 3, 2011, pages 848-852, world conference on information technology.
[4] http://meshlab.sourceforge.net/
[5] https://www.ark3d.enea.it/home.html
[6] http://www.thoro.de/page/3dnp-introduction-en
[7] http://www.apple.com/quicktime/download
[8] abate d., ciavarella r., frischer b., furini g., guarnieri g., migliori s., pierattini s., 3dws – 3d web service project, proceedings of computer applications and quantitative methods in archaeology, caa 2010, granada, spain, 6-9 april 2010 (in print).

deformation measurements of gabion walls using image based modeling

marek fraštia, marián marčiš, ondrej trhan
slovak university of technology, faculty of civil engineering, radlinského 11, 81368 bratislava, slovakia, e-mail: marek.frastia@stuba.sk, marian.marcis@stuba.sk, ondrej.trhan@stuba.sk

abstract

image based modeling finds use in applications where it is necessary to reconstruct the 3d surface of the observed object with a high level of detail. previous experiments show relatively high variability of the results depending on the camera type used, the processing software and the evaluation process. the authors tested the sfm (structure from motion) method to determine the stability of gabion walls. the results of the photogrammetric measurements were compared to precise geodetic point measurements.

key words: image based modeling, deformations of gabion walls, structure from motion.

1. introduction

image based modeling is a very topical technology these days. thanks to the high degree of automation in computer vision and sfm ([1], [4], [6]), it is possible to achieve fast, high-quality results for the imaged objects. there is a great variety of applications for image based modeling using the sfm technology ([2]).
it is possible to produce quality models of objects of different dimensions, from a few millimeters to tens and hundreds of meters, mostly in fields like archaeology, cultural heritage, geology, mining and aerial mapping. but there are always requirements that must be met if the desired accuracy is to be ensured. the most important is an irregular, random texture of the object's surface. this requirement is very easy to fulfill in the case of natural stone surfaces such as a gabion wall. the other requirements are related to the camera configuration with respect to the object. the resulting accuracy of the generated 3d surface is around 0.5–2 pixels ([5]), depending on the type of the surface (higher accuracy on flat surfaces, lower in the presence of sharp details).

2. object of measurement

a gabion wall was the object of measurement (figure 1). it is located on the r1 highway at the lehota junction. the left side of the wall is 90 m long, the supporting concrete wall of the bridge is 37 m long and the right side of the gabion wall is 85 m long. the wall is 7.4 m high on the left side and 5.8 m high on the right side. the whole structure has an arc shape.

figure 1: the ground plan (top) and the front view (bottom) of the observed gabion wall, with strips 1-2, 3-4, 5-6 and 7-8 marked

changes in height and position of 10–100 mm are expected, caused by subsidence of the structure and by weather conditions. surveying at quarterly intervals was proposed, with a required accuracy of 3 mm in the height and position of the observed points. the observed points were set in 4 pairs of steel strips (figure 1) firmly fixed in the wall and signalized by reflective foils (figure 2) for precise measurement of the distances.

figure 2: the observed points (right) and the pairs of steel strips (left)

we proposed to complement the geodetic point measurements with non-selective area measurement using the technology of image based modeling. the result of such measurement is similar to a terrestrial laser scanning point cloud with high geometric resolution, which documents not only the positions of selected points but also the whole surface of the observed object. the purpose of this method is to document the changes specifically in the direction perpendicular to the wall (the x axis direction).

3. the methodology of measurement and the used equipment

geodetic measurements of the observed and reference points were realized by the spatial polar method using the leica ts30 total station with the following accuracy characteristics: mean error of angular measurement 0.5'' and of distance measurement on reflective foils 1 mm + 1 ppm. the reference network consists of 12 points signalized with reflective foils and installed on the surrounding objects (lamps, bridges, crash barriers, wells and portals). the coordinates and heights were determined in a local geodetic network with mean errors myx ≤ 2 mm and mh ≤ 2 mm. the stability of the network points is checked in every epoch of the measurement. every pair of strips of observed points is measured from a separate free standpoint to ensure that the direction of measurement is as perpendicular as possible to the reflective foil and to achieve the highest possible accuracy of the measured distance.
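for orientation, the spatial polar method named above reduces to converting a measured horizontal direction (azimuth), zenith angle and slope distance into coordinate differences of the target with respect to the standpoint. the sketch below is illustrative only, with an assumed axis convention and made-up numbers; it is not the adjustment actually used in this project.

```python
# spatial polar method: target coordinates from one total-station setup
# (illustrative sketch; assumed convention: azimuth measured from +x towards +y)
import math

def polar_to_xyz(x0, y0, z0, azimuth_deg, zenith_deg, slope_dist, instr_h=0.0, target_h=0.0):
    """standpoint (x0, y0, z0) plus measured azimuth, zenith angle and slope distance [m]."""
    az = math.radians(azimuth_deg)
    zn = math.radians(zenith_deg)
    horizontal = slope_dist * math.sin(zn)   # horizontal distance
    dx = horizontal * math.cos(az)
    dy = horizontal * math.sin(az)
    dz = slope_dist * math.cos(zn) + instr_h - target_h
    return x0 + dx, y0 + dy, z0 + dz

# made-up example: a reflective foil about 25 m away, slightly above the horizon
print(polar_to_xyz(0.0, 0.0, 100.0, azimuth_deg=5.0, zenith_deg=88.0, slope_dist=25.0))
```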
the mean errors of the observed points in position and height do not exceed myx = 3 mm, mh = 3 mm. overall there are 85 observed points (about 10 on each strip). the photogrammetric methods allowed not only the measurement of the observed points but also the complete surface scanning of the whole object. some of the observed points were selected as control points (cp); the remaining points served as check points (chp), since their coordinates are known from the geodetic measurement. two cameras were used for taking the images: the 33 mp middle format digital back leaf aptus ii-7 and the 24 mp high end compact sony nex-7 with interchangeable lenses (tab. 1). images were taken from a distance of 11 meters, from the opposite side of the road at ground level, and the left side of the wall was additionally photographed from the top level of the opposite wall (figure 3). as a result, the images of the left side of the wall form 2 image strips, while the concrete wall and the right side of the gabion wall are covered by only 1 strip. in one epoch, 100 images were taken with the phaseone camera and 91 images with the sony camera. the longitudinal image overlap was about 70%, the transverse overlap 100%, and the base to distance ratio about 0.27 (1:3.7).

table 1: technical specifications of the used cameras

phaseone 645 (body) – leaf aptus ii-7 (digital back): number of pixels 33 000 000; data format jpeg; size of the ccd sensor (36 x 48) mm2; image scale 1:245 (gsd = 1.7 mm); size of one pixel 7.02 µm; lens phaseone f2.8; resolution 6666 x 4992; focal length f = 45 mm.

sony nex-7: number of pixels 24 000 000; data format jpeg; size of the ccd sensor (24 x 16) mm2; image scale 1:550 (gsd = 2.3 mm); size of one pixel 4.2 µm; lens e f2.8; resolution 6000 x 4000; focal length f = 20 mm.

(gsd – ground sample distance, the size of a pixel on the object's surface)

figure 3: taking of the images in 2 strips

4. results of the measurements

the results of the geodetic measurements by total station between epoch 0 (14/10/2013) and epoch 1 (12/12/2013), realized by the company geosys s.r.o., indicate stability of the left side of the gabion wall and of the concrete bridge, and displacements of the observed points stabilized in the right side of the gabion wall of up to 7 mm towards the road. these values do not yet reliably prove a displacement. the height changes are minimal, within ±2 mm. the measurement of the observed points for photogrammetric processing could not be synchronized with the above-mentioned geodetic measurements, so the two cannot be directly correlated. our epochs were realized on 23/11/2013 and 22/01/2014. the statistics of the differences (tab. 2) on the individual pairs of strips imply small changes along the individual axes, below the threshold of detectability of displacements. in terms of surface changes, the values in the x direction, i.e. the direction perpendicular to the wall, are the most relevant for us. the highest changes occurred on strips 5-6 and 7-8, on the right side of the gabion wall, in the direction towards the road. photogrammetric processing was realized in the system agisoft photoscan professional, which can generate very detailed georeferenced 3d digital models and orthophotomosaics with a high degree of automation ([7]).
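as a quick plausibility check of the image scales and gsd values in table 1, the ground sample distance follows from pixel pitch, focal length and object distance. a minimal sketch using the numbers quoted above (a nominal 11 m imaging distance is assumed):

```python
# ground sample distance: pixel size projected onto the object at a given distance
def gsd(pixel_pitch_m, focal_length_m, distance_m):
    return pixel_pitch_m * distance_m / focal_length_m

# phaseone / leaf aptus ii-7: 7.02 um pixels, f = 45 mm, ~11 m to the wall
print(gsd(7.02e-6, 0.045, 11.0) * 1000)  # ~1.7 mm, image scale ~1:245
# sony nex-7: 4.2 um pixels, f = 20 mm, ~11 m to the wall
print(gsd(4.2e-6, 0.020, 11.0) * 1000)   # ~2.3 mm, image scale ~1:550
```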
after the automatic orientation of the images including camera calibration there is necessary to manually measure the cps (if the coded targets are not available). any distortions of the model are caused by the various systematic effects (e.g. lens distortion), which the images contains, and by the number of used images. the more images in the strip used and larger deviations are from exact central projection of images, the larger deformations resulted model will content. these deformations are effectively eliminated by the properly chosen number and localization of cps. by increasing the number of cps we increase the geometric quality of the model on the one hand, but time and cost will rise on the other hand. for the purpose of evaluating the quality of the model, we tested the deviations for each geodetically measured point fraštia, m. et al: deformation measurements of gabion walls using image … geoinformatics fce ctu 12, 2014 51 (check point) by choosing the different ways of fitting for both cameras. scheme of the cps layout illustrates figure 4. table 2: the statistic of the coordinate differences between the 23/11/2013 and 22/01/2014 epochs. displacements [mm] y x z strip 1-2 16 points max. 1.0 1.7 1.6 min. 0.0 1.1 0.7 arithmetical average 0.4 1.4 1.2 strip 3-4 25 points max. 0.8 -0.1 2.1 min. 0.0 -1.3 1.1 arithmetical average 0.2 -0.5 1.6 strip 5-6 22 points max. 2.4 -1.0 0.1 min. 0.0 -3.7 -2.2 arithmetical average 0.7 -2.6 -0.8 strip 7-8 22 points max. 1.4 -1.0 2.1 min. -0.4 -2.7 0.5 arithmetical average 0.5 -1.9 1.4 figure 4: scheme of layout of control points figure 5: level of detail of cloud of points shown as shaded cloud of points, detail from left: low, medium, high, ultra high (down) 1 2 3 5 4 6 fraštia, m. et al: deformation measurements of gabion walls using image … geoinformatics fce ctu 12, 2014 52 table 3: differences between geodetic and photogrammetric coordinates camera nb. of strips differences between geodetic and photogrammetric coordinates y [mm] x [mm] z [mm] phase one 1 strip 4 cp 81 chp1 max 2.5 0.4 0.6 min -33.8 -226.0 -20,3 quadratic mean 17.2 147.9 13.2 phase one 2 strips 4 cp 81 chp1 max 1.2 9.3 0.5 min -8.8 -2.6 -3.7 quadratic mean 4.8 3.9 1.9 5 cp 80 chp1 max 1.8 6.7 1.1 min -2.4 -3.1 -3.1 quadratic mean 0.9 2.5 1.0 6 cp 79 chp1 max 1.7 2.7 1.6 min -2.4 -3.0 -2.2 quadratic mean 0.8 1.1 0.7 sony nex-7 2 strips 4 cp 81 chp1 max 3.1 2.3 3.1 min -10.7 -13.0 -2.8 quadratic mean 5.0 5.3 1.5 5 cp 80 chp1 max 7.6 5.3 1.0 min -1.2 -4.6 -3.8 quadratic mean 3.2 2.3 1.4 6 cp 79 chp1 max 1.9 3.0 1.7 min -1.9 -4.6 -1.9 quadratic mean 0.9 1.8 0.7 1number of check points, from which is computed a statistics the above table can be summarized in the following conclusions and recommendations: i. inappropriate imagery and processing data can lead to a deformation of the model up to 2 decimal places worse than can be achieved. these deformations may be undetected in the standard visual and statistic controls. ii. if it is possible, we should take images in 2 (or more) parallel strips. iii. choose a lens with smaller distortion. iv. choose a pair of cps one above the other in horizontal intervals of every 10 images. v. use some pairs as check points in the processing. vi. set accuracy of cps as fixed if we don´t have doubts about their high quality of determination. vii. finely adjust the weights for the tie points sensitive (case by case) according to deviations on the control points. viii. 
for the generation of the surface points, we can use the check points as the control points and fix them (mxyz = 0). with the compliance of these principles it is realistic to achieve the accuracy of the point cloud 2 pixels in the photographing direction, i.e. perpendicular to the object. in our case it is 3.4 mm (phaseone camera), respectively 4.6 mm (sony camera). in the parallel plane with the image plane, i.e. in the plane of the wall the real accuracy of the model is about 1 pixel (1.7 respectively 2.3 mm) ([3]). the level of detail (figure 5) can be chosen within the meaning of 1 point on 1 pixel or in lower resolution, i.e. 1 point on every 2 pixels, 1 point on every 4 pixels etc. the number of points and fraštia, m. et al: deformation measurements of gabion walls using image … geoinformatics fce ctu 12, 2014 53 the detail of the surface (the distance between adjacent points) at various settings for camera phase one are documented in tab. 4: table 4: impact of the level of the processing on the number of points and the density of point cloud of whole wall (100 images) level of processing (process pixel every) ultra high 1. high 2. medium 4. low 8. lowest 16. step gsd [mm] 1.7 3.4 7 14 28 number of points/m2 444 000 111 000 20 400 5 100 1 200 total number of points [x106] 288 72 18 4.5 1.1 the result of the epoch comparison of the wall surfaces in the x-axis (direction approximately perpendicular to the wall) is a colored differential map (figure 6). compared were these epochs: 23/11/2013 – 22/01/2014, both from images from camera phase one. from figure 6 we can see, that differences between epochs in left part, on concrete wall and on a part of right wall are in an interval of ±3 mm, thus in measurement accuracy. for about 2/3 of the right part of the gabion wall are the differences represented by values of up to + 9 mm, where the sign “+” in this instance is the direction of displacement to the road. even this value doesn´t represent a demonstrable displacement, because with the precision of the determination of the surface in a direction perpendicular to the surface of 3.4 mm we reliably detect a change in this direction with a probability of 95% for values above 12 mm. at lower processing resolution, the problem can be in making of 3d model due to wires of gabion mesh. the mesh is more or less in front of stones and that fact caused lower accuracy of model surface. on the other hand, the higher resolution, the more points is generated and more hardware problems can occur. figure 6: difference map between epochs 23/11/2013 – 22/01/2014, green: ±3 mm, yellow: +(39) mm 4. conclusion this contribution was aimed to show the possibilities of photogrammetric documentation of the gabion walls for the purposes of creation of detailed 3d models and the measurements of displacements. the main advantages are the simplicity of the data acquisition, surface documentation of object and the sufficient accuracy of the results. on the other hand, the big sensitivity of the results to the method of field data collection and the data processing requires adequate knowledge and experience of human operator, which guarantee the quality of the results. acknowledgements the authors of the article thank to geosys s.r.o. for the offer of partnership in this project as well as for the provided technical information, documents and assistance in solving the project. the paper is solved within the project vega no. 1/0133/14. gabion concrete wall gabion fraštia, m. 
et al: deformation measurements of gabion walls using image … geoinformatics fce ctu 12, 2014 54 references [1] cipolla, r.: structure from motion. [online] 2008. [cited 10.10.2012]. available from [2] doneus, m., verhoeven, g., fera, m., briese, ch., kucera, m., neubauer, w.: from deposit to point cloud: a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations. geoinformatics (faculty of civil engineering, czech technical university in prague). 2011, 6. pp.81-88, isbn 978-80-010-4885-6. [3] fraštia, m.: laser versus image scanning of stone massifs. mineralia slovaca. issn 0369-2086. vol. 44, no. 2 (2012), s. 177-184 [4] junghaus, o.: studies on the photogrammetric acquisition of point clouds with the photomodeler scanner system, bachelor thesis, bochum university of applied sciences, 2010, [online]. [cited 22.02.2013] available from [5] mar iš, m.: protection and restoration of cultural heritage using the methods of digital photogrammetry, phd thesis, slovak university of technology, faculty of civil engineering, department of surveying, 2013. [6] pavelka, k., �ezní ek, j.: new low-cost automated processing of digital photos for documenation and visualisation of the cultural heritage. geoinformatics [online]. 2011, vol. 6, no. 6, p. 245-258. available from . issn 1802-2669. [7] semyonov, d. algorithms used in photoscan, agisoft photoscan forum, 03.05.2011 [cited 03.10.2012] ________________________________________________________________________________ geoinformatics ctu fce 2011 48 learning from the building: direct sources for the preservation project. the experience of besozzo's town hall (varese, italy) susanna bortolotto1, elisabetta ciocchini1, andrea frigo1, andrea garzulino1, raffaella simonelli1, fabio zangheri1 1politecnico di milano, dipartimento di progettazione dell‟architettura dpa, laboratorio di diagnostica per la conservazione e il riuso del costruito via durando 10, milano, italy, susanna.bortolotto@polimi.it keywords: knowledge, architecture, surveying, stratigraphy analysis, documentation, conservation abstract: the town hall of besozzo (varese, italy) is located in the city centre of the village and its first construction phase is dated back to the xiv-xv century. it shows a complex palimpsest which is the result of the numerous transformations occurred during its life: enlargements, super elevations, demolitions, inner spaces subdivisions and use changes. currently a project has been issued for the reuse of the building which assigns new spaces for the town offices to the northern wing recently acquired. the aim of the research was to provide a diagnostic insight, useful for the development of the conservation project which will necessarily take into account the multitude of values registered on the building. owing to a lack of meaningful archival documentation, the elevation’s stratigraphic reading and the methods for dating historical buildings proved to be an invaluable resource for the comprehension of the building’s transformations. cross-referencing readings of indirect sources carried on the building with the results of the in-depth analysis made it possible to rebuild the growth of the structure from its origin to the present days. 
such analysis includes: geometric survey, photographic rectifications of facade and inner sections, non-destructive diagnostic investigations, bricks, mortar and plaster chemical-physical analysis, mensiochronology, study of the building techniques and chronotypology which is a stylistic analysis performed both on the constructive (apertures) and decorative (shelves, graffiti, colourings traces) architectural elements. blending the results of these dating techniques produced the complexity of the stratigraphic reading which has been conveyed with adequate hatching on the rectified images (u.s. – stratigraphic unity) while schematic 3d reconstructions exemplify the chronological sequence of the building activities. individuation and comprehension of the building constructive phases made also possible to understand which were the different uses of each room inside this domestic architecture thus providing the client and the bodies in charge of protection with valuable data for the preservation project. 1. introduction archaeological diagnosis, under good conditions, is a method to discover the physical history of any type of building, based on the materials used in construction (stone and bricks, in the case of the town hall of besozzo), techniques, the „architectural forms‟ and the vertical, horizontal and surface stratification. archaeological reading not as a mandatory practice, but a valuable tool to consider other possible „points of view‟, to confirm otherwise „hypothetical‟ assessments and to justify further investigations, as well as to rediscover new stability or instability within the building itself. a tool that can draw attention to the „complexity‟ of the evolutionary processes of construction, respecting the real physical structures and material components that constitute an unrepeatable specific context, for in-depth knowledge of „the whole story‟, in order to optimise specific methods of „care‟ and conservation. for the case study of the town hall of besozzo, given the fragmentary nature of the information, it was decided to use dating methods for historical buildings (stratigraphic survey of the elevations, identification of masonry techniques and their chronotypology, mensiochronological analysis of the bricks, the technotypology and chronotypology of the apertures) in order to collect more information and to be able to proceed with a project for conservation and reuse that is aware of the materia signata. 2. indirect research: historical and documentary research an architectural conservation project presupposes, between the different phases of investigating the building, preliminary historical research in order to clarify the key construction phases of the building, the changes due to different ways of living over time and the maintenance and restoration required to arrest natural and anthropogenic ________________________________________________________________________________ geoinformatics ctu fce 2011 49 degradation. the data resulting from the historical research phase, taken into consideration along with information from other preliminary phases of investigation, helps to define choices for conservation and reuse of the building. the historical analysis of the town hall of besozzo was first set out in the collection of data from local historical publications and subsequently, in research into unpublished archival documents. 
the tracked down documents, mostly cadastral or notarial in nature, written on behalf of the several families that owned and lived in the grounds of the town hall, have allowed the identification of the key phases of construction and more precise dating of some modifications that took place between the fourteenth and twenty-first century, which were also confirmed by direct „reading‟ of the building. 3. geometric survey and photographic rectification the geometric survey was conducted with a topographic approach to which the necessary detailed surveys were added, and carried out entirely by direct survey. this survey, as well as geometric quantification, also improved the photographic aspect, necessary for the subsequent processing of stratigraphic data. using data from the topographic campaign led to the realisation of rectified digital photography of the internal frontages of all the places surveyed. this choice turned out to be the most suitable as it allowed any comparison to be made, very quickly, between different parts of the building, maintaining connections between architectural elements, even when belonging to non-adjoining areas. this was an operation that proved useful in the understanding of the architectural works and artefacts and difficult in situ implementation. 4. direct research 4.1 stratigraphic survey of the elevations the method of archaeological reading borrowed from real archaeological science provides excellent results in the architectural field as well. this method of application – visual and non-destructive – favours a reading of direct sources, that is, a reading of the information contained on the walls themselves. furthermore it provides more in-depth knowledge of certain aspects, suggesting from time to time investigation routes that avoid the irreversible loss of authenticity of the building, in full respect for the stratified „material culture‟ that has been passed down to us. it is implemented through stratigraphic survey, a type of graphic survey that records the relevant chronologies, recognisable in the constructed parts. the identification of different stratigraphic units (s.u.) and architectural elements (a.e.), along with their exact number, allows us not only to identify every construction activity surveyed, but also to subsequently relate them, until we can draw up the reading by phase (figure 1). the quality of the results obtained is confirmed by comparison with the stratigraphic data collected and the relevant dating obtained from mensiochronological reading, from analysis of masonry techniques, chronotypology and the results of the historical archival research. the data obtained from the physicochemical characterisation of the materials (bricks and mortar) is also evaluated. 4.2 mensiochronology the relationship between the measurements of a brick and its period of construction and implementation is now used as a valuable methodological tool for archaeological investigation into construction, in order to verify and support other dating methods of historical construction. it was possible to see how the dimensions of the bricks undergo apparently random and unimportant changes over time depending on region of origin, period of production and di fferent socioeconomic conditions. there were no case studies to refer to for the besozzo area that could provide useful information about such changes over time. 
despite the difficulty in choosing the survey samples, due to the fact that a large part of the building is plastered both internally and externally and that the prevailing construction technique was „masonry in mixed stone‟, it was possible to conduct a mensiochronological study with the goal of verifying any differences between stratigraphic units, the quality of the materials used and geometric and technical characteristics. in the case of the besozzo town hall the mensiochronological reading was carried out on the extrados of the vault built between the basement and ground floor and on non-plastered wall surfaces. in this case, the mensiochronological reading was prepared to suppose a relative dating and to verify that the vault could be considered the result of a single construction activity. the barrel vault with lunettes, resulting from the intersection between a barrel vault (set in a rectangular room) and other vaults and made to allow the opening of doorways and windows in the basement, is very thin and is built from reused bricks characterised by reduced thickness and length, no doubt different from those used for the walls. it was then supposed that the creation of the vaulted room could be ascribed to a single phase of construction (seventeenth – eighteenth century) which took the pre-existing geometry (position of windows on the south-west façade) into account for the construction of the abutments of the vault and lunettes and which was adapted for the new openings on the north-east façade. it was then possible to date the formation of the doors and windows, whose abutments are characterised by bricks of equal dimension, despite the small number of bricks that could be sampled due to the state of conservation and the presence of traces of plaster. finally, the chemical-physical analysis of the bricks and mortar ________________________________________________________________________________ geoinformatics ctu fce 2011 50 provided further verification of the quality of the results. this was in order to provide a solid „foundation‟ for the formulation of a more aware overall chronological framework of the construction of the architectural palimpsest being studied. figure 1: longitudinal section (bb) geometric survey, photographic rectification and stratigraphic reading 4.3 technotypology and chronotypology of the apertures the chronotypology and reading of the masonry techniques or technotypology made it possible to date the historical building relatively. such direct sources in absence, as is the case at besozzo, of a local chronotypological curve or catalogue of construction techniques pertaining to the geographical reference, may still be valid interpretations for ________________________________________________________________________________ geoinformatics ctu fce 2011 51 dating the different parts of the architectural structures. identification of different types of doors and windows and study of different construction techniques allows us to discover record and decode written information on the matter (figure 2). the files on the doors and windows of the town hall of besozzo include a series of information (size of the perforated patterns, any typological, formal or material differences) derived initially from geometric surveys and photographic rectification of the façades, implemented from data retrieved directly in situ during targeted inspections. 
summary files record construction techniques, materials, finishing and information on the presence of shutters, grilles, window shutters, doorsteps, windowsills, railings and their characteristics. all this data was of great use in subsequent readings comparing typological classes and construction activity, as well as chronological readings associated with other analyses performed at the besozzo town hall. as well as supporting the chronology of the architectural structure, the typological analysis of the doors and windows provided information on the way that the internal rooms were used and the role of light within this domestic architecture. doors and windows with shutters and screens were used for defence, lighting, ventilation and heating. this is more evident to the south, where „view‟ windows (lancet windows) and „light‟ windows (located on the ground floor, in a high position almost directly below the floor above) can be seen, small openings that allowed the lighting of the entrance halls with the door closed. the support of a „clothcovered‟ frame is also supposed, canvas waterproofed with linseed oil, turpentine or something else, reported by indirect (written and iconographic) sources: it should have been a frame positioned in the light towards the exterior, being either fixed or seasonal, or limited to the cold periods of the year. from „cloth coverings‟ the system may have moved on to subsequently use leaded glass on a fixed frame, perhaps made of metal, or glass doors mounted on opening doors and windows. as for the doors, on the interior it was necessary to have support for robust metal and wooden doors. on the southern façade two similar filled-in doors on the first floor lead to the hypothesis of the construction (after the construction of the lancet windows) of an external access walkway, dating back to the fifteenth-sixteenth centuries. figure 2: particular of a ogive window (xiv-xv century) on the south front 5. conclusions: the construction phases of the building the cross-referencing reading with the results ascribable to each survey conducted has allowed us to reconstruct the formation process of the town hall of besozzo. the vast complex is situated in besozzo (varese), an important village of ancient origins, site of a castle, home to da bexutio. the first phases of construction of the complex date back to the early fourteenth century (phase i), and constituted a series of buildings arranged in an l-shape around a courtyard and overlooking in part the village‟s main road which led west, in part over open spaces situated to the north. building b, the subject of this study, consisted of a basement room, overlooking the courtyard, perhaps destined for use as a cellar ________________________________________________________________________________ geoinformatics ctu fce 2011 52 or storeroom, covered with a wooden floor slab supported by stone corbels. between the fourteenth and fifteenth century (phase ii) the room, destined for new use as accommodation or a service room, underwent a first modification with the opening of two large windows on the southern façade and the reconstruction of the floor, placed at a higher altitude. during the fifteenth century (phase iii) the building was radically transformed with the addition of two floors, in conjunction with the partial expansion of the courtyard. the two large new rooms, located on the ground and first floors, were covered with wooden slabs supported by wooden brackets. 
the building was accessed through a small, low-vaulted door on the first floor. the new rooms were illuminated by five large lancet windows, three of which still exist. the shape of the windows, doors and wooden brackets (figure 3) allow us to date with some confidence the new construction to throughout the fifteenth century, although both the building traditions consolidated among local workers and the distance from milan and pavia, the major cultural centres of the duchy of milan, may postpone the date to within the first two decades of the sixteenth century. between the fifteenth and sixteenth century (phase iv), probable internal divisions made on the first floor called for the opening of two doors on the southern facade, reached by an access ladder to a wooden walkway, which no longer exists. during the sixteenth century (phase v) the level of the internal courtyard was probably raised, permanently reducing the space below, built during the first construction phase, to a cellar or storeroom. it is during this phase that the apertures of the space were filled in and the construction of the body of building d on the east side of the court took place. the change in the level of the floor of the court involved a series of changes: raising the ceiling of the ground floor, the walling-in of the old access and the opening in breach of a new entrance, of similar shape but larger in size. the coeval opening of a „light window‟ above the new access door allowed for the hypothesis of a closing system with locks made of wooden support boards for the lancet „view‟ windows. the entire southern façade was plastered and perhaps painted later, as proved by traces of blue and red and engraving with geometric patterns (figure 4). figure 3: a wooden bracket of xv-xvi century. figure 4: particular of geometric patterns in the ancient plaster. ________________________________________________________________________________ geoinformatics ctu fce 2011 53 figure 5: schematic representation of the building's historical evolution on the different levels. ________________________________________________________________________________ geoinformatics ctu fce 2011 54 between the seventeenth and eighteenth century (phase vi) the building passed hands from the castelbesozzi to the masnaghi family and was transformed into a genuine „nobleman‟s home‟. the ground floor room was decorated with a large framed fireplace, perhaps in macchiavecchia, removed after 2005. during the eighteenth century (phase vii) the masnaghi family filled in all the existing apertures on the southern façade of the building and opened doors and windows to the north. to the end of the eighteenth century the building, under the new ownership of mazzola and del vitto, was home to a workshop and was internally divided, on the first and ground floors, into rooms to rent, to which access was provided by a stone walkway. by 1816 (phase viii) further internal adaptations were made and, between 1816 and 1890 (phase ix), the complex was expanded by the construction, in the northwest corner of the courtyard overlooking the street, of a space used as shop/restaurant with an upper room and a porch with a shed, barn and hayloft above, pulled down in the late twentieth century. until the sixties (phase x) the building remained in the hands of del vitto with no further modifications. in 2006 it was sold to the town of besozzo and between 2009 and 2011 studies and initial restoration work began (figure 5). 
the reconstruction of the formation process of the building by its construction phases is closely tied to the concept of a layered palimpsest, understood as the unique, irreproducible deposit of material culture that will guide the future project of conservation and reuse. with respect to all of the analyses carried out such a project will therefore be more attentive to the differences than to similarities and claim awareness of the singularity, specificity and irreproducibility of every sign of time and man, read on the materia signata. 8. references [1] brunella, r.l.: frammenti di storia besozzese. brevi notizie preistoriche e storiche di besozzo e dintorni, besozzo, 1960. [2] boriani, m.(a cura di): patrimonio archeologico, progetto architettonico e urbano, firenze, 1997. [3] bortolotto, s.: il rilievo stratigrafico, in c. campanella (a cura di) “il rilievo degli edifici”, milano, 2004, pp.ř4115. [4] bortolotto, s., colla, c., mirandola, d., sponchioni, a.: palazzo cittadini stampa: role of stratigraphy and cinematic analysys in the knowledge of a mansonry building,, c. modena, p.b. lourenco, p. roca (a cura di), structural analysis of historical constructions. possibilities of numerical and experimental techniques, london, 2005, 1, 167-175. [5] bortolotto, s., campanella, c., tessoni, m., macchi, a.: methods for dating historical buildings and verticality control of the baronale palace at avio’s castle (tn), cipa: “international cooperation to save the world‟s cultural heritage”, torino, settembre 2005, 177-182. [6] parenti, r.: sulle possibilità di datazione e di classificazione delle murature, in r. francovich, r. parenti (a cura di), archeologia e restauro dei monumenti, firenze, 1988, 280-304. [7] gabrielli, f.: la “cronotipologia relativa” come metodo di analisi degli elevati: la facciata del palazzo pubblico di siena, archeologia dell‟architettura”, i, 1řř6, 17-40. [8] mannoni, t., milanese, m.: mensiocronologia, in r. francovich, r. parenti (a cura di), archeologia e restauro dei monumenti, firenze, 1988, 383-402. geometric documentation of underwater archaeological sites eleni diamanti1, andreas georgopoulos1, fotini vlachaki2 1laboratory of photogrammetry, national technical university of athens drag@central.ntua.gr 2hellenic institute of marine archaeology ienae@otenet.gr abstract photogrammetry has often been the most preferable method for the geometric documentation of monuments, especially in cases of highly complex objects, of high accuracy and quality requirements and, of course, budget, time or accessibility limitations. such limitations, requirements and complexities are undoubtedly features of the highly challenging task of surveying an underwater archaeological site. this paper is focused on the case of a hellenistic shipwreck found in greece at the southern euboean gulf, 40-47 meters below the sea surface. underwater photogrammetry was chosen as the ideal solution for the detailed and accurate mapping of a shipwreck located in an environment with limited accessibility. there are time limitations when diving at these depths so it is essential that the data collection time is kept as short as possible. this makes custom surveying techniques rather impossible to apply. however, with the growing use of consumer cameras and photogrammetric software, this application is becoming easier, thus benefiting a wide variety of underwater sites. 
utilizing cameras for underwater photogrammetry, though, poses some crucial modeling problems, due to the refraction effect and additional parameters which have to be co-estimated [1]. the applied method involved an underwater calibration of the camera as well as conventional field survey measurements in order to establish a reference frame. the application of a three-dimensional trilateration using common tape measures was chosen for this reason. among the software used for the surveying and photogrammetric processing were site recorder se, eos systems' photomodeler, z/i's ssk and rhinoceros. the underwater archaeological research in the southern euboean gulf is a continuing project carried out by the hellenic institute for marine archaeology (h.i.m.a.) in collaboration with the greek ephorate of underwater antiquities, under the direction of the archaeologist g. koutsouflakis. the geometric documentation of the shipwreck was the result of the collaboration between h.i.m.a. and the national technical university of athens. keywords: underwater photogrammetry, trilateration, underwater camera calibration, visualization, orthophotomosaics, 3d reconstruction, ancient shipwreck 1. introduction with 17,000 kilometers of coastline, equivalent to 25% of the total mediterranean coast, with almost 3,500 islands and at least 1,000 shipwrecks detected in the greek seas, greece is a country with one of the largest and perhaps the most important underwater archaeological heritages. ancient shipwrecks, submerged settlements and ancient harbors have been housed for centuries in the greek seas. nevertheless, the practical application of theoretical and technological developments in the fields of surveying and photogrammetry to the geometric documentation of this underwater heritage lags far behind the rapid developments and innovations that are applied when it comes to surveying 'terrestrial' monuments. this paper presents an effort to improve and experiment on the synergy of conventional surveying techniques, such as simple tape measurements and trilateration adjustments, with modern software and digital technology, in order to produce three-dimensional reconstructions that can assist underwater archaeologists in reaching their scientific conclusions, through a geometrically accurate documentation of the site and the excavation process, as well as to take those who do not have the opportunity to access this submerged monument on a 'digital trip' to an ancient shipwreck in deep waters. 1.1. description of the object the hellenistic shipwreck examined in this paper was found in 2006 on the northwest side of the island styronisi in the southern euboean gulf, at a depth between 39 and 47 meters below the sea surface. the shipwreck dates back to the late hellenistic period (late 2nd to first half of the 1st century b.c.) and it is the only ancient shipwreck detected in the southern euboean gulf in such relatively good condition. the exposed shipwreck is approximately 18 meters long and 7 meters wide. the cargo of the ship consists mainly of intact and broken amphorae, 90% of which are considered to be of the brindisi type. additionally, interesting objects were found among the ship's cargo, such as parts of luxurious bronze furniture, bronze and steel spikes, a stone wash basin, parts of the harness of the ship and broken tiles.
among the most important finds of the whole archaeological survey are a small part of the dress of a life-size bronze statue and, beneath the surface layer of sand, two parts of the wooden hull of the vessel, something very rare, since wood is seldom preserved for such a long time in underwater archaeological sites. since the discovery of this specific wreck is considered of great importance, h.i.m.a. and the supervising archaeologist intended to launch a systematic investigation and excavation of this monument. therefore, the detailed and accurate documentation of the site became an immediate priority. orthophotomosaics and 3d rendered models of the wreck were considered the ideal products in order to map the site in the condition in which it was found, prior to excavation [2]. of course, when it comes to the excavation period, the requirements increase: daily recording of the excavation trenches, mapping of the 3d locations of the artifacts, 3d reconstruction of the shipwreck excavation, production of daily 2d plans and 3d measurement and modelling of finds are some of the needs that arise for the complete documentation of such a monument. 1.2. underwater surveying & underwater photogrammetry it is well known that conventional mapping is a process subject to human error in underwater archaeology [3, 4, 5], while photogrammetry has long been a viable technique in such situations [6]. thus the main objective is to create a three-dimensional model of the site using photogrammetry, which could be dynamically updated in the future according to the progress of the ongoing archaeological excavation. underwater photogrammetry clearly offers some advantages for the surveying of a submerged site, such as overcoming limited on-site accessibility and being non-destructive and efficient. on the other hand, some crucial and inevitable matters that have to be faced and co-estimated arise in such conditions: no operational control over data acquisition, low image quality caused by poor underwater lighting (e.g. variations of scattering or absorption of the red wavelength, especially in deep waters) even if artificial lighting is used, two-media (air-water) data collection, significant diffusion that complicates object recognition and tie point measurement and, last but not least, control point establishment limitations, as common tape measurements and 3d trilateration methods are perhaps the only plausible methods. despite all the difficulties encountered, photogrammetric software can increasingly be extended from land-based applications to underwater applications. 2. the geometric documentation of the hellenistic shipwreck in south euboea 2.1. establishing an underwater control points network one of the main tasks of the surveying procedure on site was to establish an underwater network of control points, which would be measured and adjusted by tape trilateration. once the theoretical control network had been designed in terms of adequate geometry, i.e. widely dispersed control points and sufficient stability of the points, it had to be set up on the site. it is at this point where the problems associated with surveying under water start to affect the quality of the survey.
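before describing those practical difficulties, a minimal sketch may make concrete what the adjustment has to solve: each tape distance constrains the unknown 3d coordinates of two control points, and a least-squares solution reconciles all distances at once. the sketch below uses scipy in place of the site recorder se software actually employed, and the point names, distances and starting coordinates are invented for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical tape distances between control points, in metres
observations = [
    ("cp1", "cp2", 5.01), ("cp1", "cp3", 6.42), ("cp1", "cp4", 4.05),
    ("cp1", "cp5", 3.20), ("cp2", "cp3", 3.99), ("cp2", "cp4", 6.44),
    ("cp2", "cp5", 3.24), ("cp3", "cp4", 5.00), ("cp3", "cp5", 3.21),
    ("cp4", "cp5", 3.32),
]
names = sorted({p for a, b, _ in observations for p in (a, b)})
idx = {n: i for i, n in enumerate(names)}

def residuals(x):
    # difference between each computed and measured distance
    xyz = x.reshape(-1, 3)
    return [np.linalg.norm(xyz[idx[a]] - xyz[idx[b]]) - d for a, b, d in observations]

# rough starting coordinates, e.g. scaled off the preliminary photomosaic;
# with distances only, the datum (origin and orientation) stays arbitrary and
# would in practice be fixed by holding a few coordinates.
x0 = np.array([[0, 0, 0], [5, 0, 0], [5, 4, 0], [0, 4, 0.5], [2.5, 2, -0.3]], float).ravel()
sol = least_squares(residuals, x0)
print(dict(zip(names, sol.x.reshape(-1, 3).round(3))))
print("rms of distance residuals [m]:", round(float(np.sqrt(np.mean(np.square(sol.fun)))), 3))
```

in a real adjustment the redundancy of several distances per point is what allows the accuracy of each coordinate to be estimated, as described in the following paragraphs.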
the tape survey is dependent on the divers' ability to install control points in geometrically correct positions of high rigidity, as well as to measure the distances between the points with sufficient accuracy. in this case, 20 control points were established. they were made of targets stuck on 10x10 cm plexiglas tiles; fourteen of them, fitted onto 0.5 m long metal rods, were inserted into the sand as deep as possible (figure 1a) and the remaining six were fitted, with common tie-wraps, on the mouths of 6 intact amphorae (figure 1b). the tiles bearing the target points were also labeled with numbers, so that they could be easily identified by the divers. figure 1: (a) control points inserted in the sand, (b) control points fitted on the mouths of amphorae. the position of each control point was measured from at least 5 other control points of the network. from a total number of 119 measured distances, 82 were selected and adjusted using the site recorder se software through a large least-squares network adjustment. in order to obtain three-dimensional coordinates for a point, the minimum number of measurements is of course three; with only three measurements, error or reliability cannot be estimated. therefore, at least 5 measurements from each point were taken, so that the accuracy of the coordinates of each control point could be estimated. the total rms of the trilateration adjustment was 0.027 m. 2.2. data acquisition for the optimal organization of the photogrammetric restitution, a photomosaic of the shipwreck was created as an approximate complete mapping of the site. the hugin open source software, typically used for stitching and blending a series of photographs, was employed. this software was developed by b. hartzler, an archaeologist and member of h.i.m.a. [7]. this first photomosaic proved to be a very useful tool for the photogrammetric image data acquisition that followed. figure 2: photo data capture. for the image data acquisition, a sony dslr-a700 camera and an ikelite® protective housing were available. the camera has a 12 mm lens and all individual images have a resolution of 4272x2848 pixels. the diver-photographer "flew" over the wreck (figure 2), taking about 120 photos in 4 strips, with a 70-80% forward overlap and a 50% side overlap. the physics of underwater light diffusion requires that images be taken as close to the object as possible; images taken from longer distances have much lower quality. a standard "flying height", a strictly straight strip line and a satisfying overlap between images are definitely requirements of an optimal data acquisition for photogrammetric processing purposes. nevertheless, this proves to be a really challenging task when photographing under such conditions. in this case, the limited available time of the archaeological research, the increased difficulty of approaching the site and the strong underwater currents did not favor an image acquisition process meeting the aforementioned requirements. 3. photogrammetric processing 3.1. underwater camera calibration the accurate 3d reconstruction, as well as the pose estimation, of an object from images requires thorough knowledge of the intrinsic camera parameters, i.e. focal length, principal point coordinates and lens distortion.
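as a point of reference before the underwater treatment is discussed, the sketch below shows the kind of intrinsic model such a calibration estimates: a pinhole projection scaled by the focal length, shifted to the principal point and warped by two radial distortion coefficients. the parameterization is a generic brown model, not necessarily photomodeler's internal formulation, and all numbers are illustrative only.

```python
import numpy as np

def project(point_cam, f_mm, pixel_size_mm, cx, cy, k1, k2):
    """project a 3d point given in the camera frame (metres) to pixel coordinates."""
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]   # normalised coords
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2                             # radial distortion factor
    f_px = f_mm / pixel_size_mm                                       # focal length in pixels
    return cx + f_px * x * radial, cy + f_px * y * radial

# illustrative values in the spirit of the text: a 15.03 mm "wet" focal length,
# 5.8 um pixels and a principal point near the centre of a 4272x2848 frame
print(project(np.array([0.4, -0.2, 3.0]), 15.03, 0.0058, 2136.0, 1424.0, -0.05, 0.002))
```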
the underwater camera calibration problem has been treated in several ways so far. it is a 'standard case' of multimedia photogrammetry [8]: three media; an object in water, a sensor located in air and a transparent plane of the camera housing separating the object from the sensor. as far as using images for underwater surveying is concerned, there are generally two categories of approaches to camera calibration. the first is based on dry camera calibration methods, where the intrinsic parameters of a camera immersed in water, or any other fluid, can be calculated from an air calibration, as long as the optical surface between the two fluids presents some simple geometrical properties [9]. the second approach is not based on modeling the parameters of the different media through mathematical models, but treats the camera-housing system (or air-glass-water) as a single system. the camera calibration in this project's case is based on the second approach and was carried out using the photomodeler® calibration module. a sony dslr-a700 camera, in a waterproof ikelite housing, was immersed into water and a total of 24 images of the board bearing the calibration pattern, i.e. a grid of specific dots (figures 3a, b, c), were taken. photomodeler firstly analyses each picture using a line interpolation algorithm to find and mark the dots and the 4 control points of the plane pattern [10]. seventeen (17) images were processed in image processing software in such a way that the algorithm could detect only the coded dot targets (figure 3b). the scale was constrained by the 4 known distances between the control points of the pattern. figure 3: (a) initial calibration image, (b) corrected calibration image, (c) camera positions. on completion of the calibration, the intrinsic parameters of the camera were determined, including the principal point coordinates, the radial distortion values and a focal length of 15.03 mm. as far as the focal length is concerned, the ratio of computed to nominal value (15.03 mm / 13 mm) was found to be 1.16. when compared to the refractive index (1.34), it is obvious that underwater refraction is not the only parameter that has to be estimated, contrary to the dry camera calibration procedures. depth, temperature and salinity can be considered unstable parameters affecting a typical photogrammetric camera calibration. in contrast to the refractive index, there are no mathematical models describing these parameters, which prevents a fully reliable camera calibration procedure. eventually, it would be desirable, given unrestricted underwater time, if a camera calibration could be completed in conditions identical to those of each archaeological photogrammetric dive. 3.2. photogrammetric bundle adjustment a matter of high importance is to have the acquired images pre-rectified before using them in the photogrammetric processing [11]. preprocessing adjustments include radiometric enhancement of the images, i.e. brightness adjustment, contrast enhancement, edge enhancement, noise reduction etc. therefore, 46 images were selected and processed using adobe photoshop software, through the 'neutralize', 'brightness-contrast' and 'color balance' commands (figures 4a and 4b). figure 4: (a) initial image, (b) color processed image.
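the same kind of radiometric preparation can be sketched outside photoshop; the snippet below applies a simple gray-world colour balance (compensating the loss of the red wavelength) and a percentile contrast stretch with opencv and numpy. it is only an illustration of the idea, not the commands actually used, and the file names are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("dive_strip2_img014.jpg").astype(np.float32)

# gray-world balance: scale each channel so its mean matches the global mean,
# which partly compensates the strong absorption of red under water
means = img.reshape(-1, 3).mean(axis=0)
img *= means.mean() / means

# linear contrast stretch between the 2nd and 98th percentiles
lo, hi = np.percentile(img, (2, 98))
img = np.clip((img - lo) / (hi - lo) * 255, 0, 255)

cv2.imwrite("dive_strip2_img014_corrected.jpg", img.astype(np.uint8))
```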
once all 46 images were preprocessed, the photogrammetric adjustment was implemented using the imagestation® software. the block adjustment was done using 46 images at an approximate scale range from 1:200 to 1:600, as the diver-photographer could not swim parallel to the site. twenty (20) points with known coordinates, as determined by the trilateration adjustment in site recorder, were recognized, marked on the images and used as control points. the a priori control point precision was set at σxy = 0.04 m and σz = 0.07 m. table 1 shows the results of the bundle adjustment. table 1: bundle adjustment results: rmsxy = 6.2 µm, pixel size = 5.8 µm, rmsxy = 0.025 m, rmsz = 0.037 m. a fact which has to be stressed is that the block included images of regions of sandy seabed. this was a problem as far as finding a sufficient number of common points between images is concerned. to increase the number of common points, plexiglas strips with coded targets, similar to the photomodeler® calibration targets, were positioned in those sandy areas. this method proved to be very helpful in the end, in the attempt to evaluate the accuracy of the final orthophotomosaic of the site, especially in sandy areas where the control point distribution could not be very dense. 3.3. orthophotomosaic of the shipwreck one of the main goals of the work was to produce an accurate and radiometrically correct orthophotomosaic of the entire area of the shipwreck. therefore, once the photogrammetric bundle adjustment of the block was completed, the extraction of a digital surface model was initiated. figure 5: digital surface model of the site. approximately 54000 points and 2000 break lines were collected manually through stereoscopic viewing of the oriented models, thus producing a dense dsm (figure 5). the attempt at automated dsm extraction failed, due mainly to problems connected with weak image radiometry, e.g. similar tones especially due to the absorption of the red wavelength, repetitive features and poor textures like sandy areas, scale variations and large rotation differences between images and, finally, occlusions. as a result, the acquired data did not meet the standards of classical image matching methods and manual dsm extraction was the only solution. once the dsm was extracted, an orthophotomosaic of the shipwreck's area, made out of 46 orthorectified images (figure 6), was produced using z/i's imagestation orthopro® software. the final product was evaluated in two ways: 1) by comparing the control points network, as it resulted from the trilateration adjustment, with the orthophotomosaic; 2) by measuring on the orthophotomosaic the known distances between the plexiglas strips of photomodeler coded targets. figure 6: orthophoto mapping of the shipwreck's area with the use of the dsm. figure 7: (a) 2d scaled drawing of a brindisi amphora, (b) 3d revolved amphora model, (c) 3d rendered model. 3.4. 3d reconstruction of the shipwreck the ship's cargo consisted mainly of amphorae of one type, i.e. brindisi amphorae. a 3d theoretical revolved model was obtained from the 2d scaled drawing of one such amphora, which had been taken off the site, using the rhinoceros® software (figures 7a, 7b and 7c).
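the revolved model itself is conceptually simple: a 2d profile (radius versus height, digitised from the scaled drawing) is swept around the vertical axis. the sketch below shows such a sweep with numpy; the profile points are invented and the real model was of course built in rhinoceros.

```python
import numpy as np

# 2d profile of half an amphora in metres: (radius from the axis, height)
profile = np.array([[0.02, 0.00], [0.18, 0.15], [0.22, 0.40],
                    [0.15, 0.70], [0.08, 0.85], [0.10, 0.90]])

def revolve(profile, n_steps=72):
    """sweep the profile around the vertical axis and return vertices and quad faces."""
    angles = np.linspace(0, 2 * np.pi, n_steps, endpoint=False)
    verts = np.array([[r * np.cos(a), r * np.sin(a), z]
                      for a in angles for r, z in profile])
    m = len(profile)
    quads = [[i * m + j, i * m + j + 1,
              ((i + 1) % n_steps) * m + j + 1, ((i + 1) % n_steps) * m + j]
             for i in range(n_steps) for j in range(m - 1)]
    return verts, np.array(quads)

vertices, faces = revolve(profile)
print(vertices.shape, faces.shape)   # (432, 3) (360, 4)
```

as described next, copies of such a theoretical model can then be positioned and oriented at each photogrammetrically measured amphora.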
due to the fact that most of the objects were only partially visible in the images, or even broken, photogrammetric measurements were not enough for the complete 3d reconstruction of the site. the main reason for not using the already constructed dense dsm for plotting each object is that it is a 2.5d model; many of the objects' attributes were not visible in the images, so an important amount of information remained hidden. therefore, characteristic features of each object were photogrammetrically measured, so that the accurate shape, size and orientation of the amphorae could be restored. the final 3d model is a combination of the theoretical models and the photogrammetrically measured attributes of the various finds (figure 8). the choice of attributes is based on measuring particular parts of the finds, i.e. rims, bodies, mouths or handles of each amphora, so that each object could be both positioned and oriented efficiently. the photogrammetric measurement of those attributes was performed using the photomodeler® software, in which approximately 60 images were oriented. photomodeler provides the opportunity of orienting a large number of images taken from various angles, thus regaining the lost information of hidden objects and leading to the optimal 3d reconstruction of the site. measured objects were divided into three layers in photomodeler: a) the 'terrain' layer, which plays the role of a dtm and consists of points of sand and rocks, b) the 'finds' layer, which includes all measured attributes of the finds and c) the 'control points' layer, which includes the control points network. figure 8: measured attributes of the artifacts. a dxf file of the points of the above layers, together with images bearing the ids of all points of interest (figure 8), was exported from photomodeler® and then imported into rhinoceros®. the entire 3d reconstruction of the site was finally implemented in rhinoceros® by creating the terrain surface first and then placing each find in its correct position (figure 9). a more realistic representation of the site was achieved by assigning texture extracted from the images through a suitable rendering procedure (figure 10). the rendering was implemented in autodesk's 3ds max®. figure 9: 3d wireframe model of the shipwreck. figure 10: 3d rendered model of the shipwreck, autodesk 3ds max. 4. conclusions estimating the accuracy of any underwater surveying technique is notoriously difficult [12]. the results of recording the shipwreck of the south euboean gulf represent an attempt at the best achievable accuracy when combining conventional tape measurements with modern and user-friendly photogrammetric software. given the nature of the control points (some robust, others more fragile), it is likely that the points themselves had an uncertainty in their position, which affected the accuracy of the photogrammetric bundle adjustment. therefore, the task of obtaining a robust control points network seems as challenging as the task of using photogrammetry for underwater surveying without establishing a control points network at all. the second task requires a perfectly calibrated camera and a way to restore the scale during the bundle adjustment of a block of images.
evaluating, finally, the work that has been done under water, given the aforementioned available diving time, it seems preferable to spend more diving time on calibrating cameras under several underwater conditions, in order to obtain an optimal interior orientation, and to avoid spending time on the really time-consuming task of measuring distances with common tapes. moreover, the use of several different software packages for one final goal may also be costly in terms of time and work, but in this case it was unavoidable to use several pieces of photogrammetric software; each one was used as a different tool. photomodeler provides, firstly, a user-friendly automated camera calibration module and, secondly, the opportunity of orienting a large number of images taken from different angles, so that more information about the object could be obtained. on the other hand, the imagestation software was chosen as a more reliable way, compared to photomodeler®, to create a dsm and an orthophoto, thanks to dsm collection through stereo vision. in conclusion, the application of photogrammetry, in terms of generating accurate and radiometrically sound orthophotomosaics and 3d rendered models and combining inescapable traditional surveying techniques with contemporary digital software support, has proven to be a unique way to achieve a virtual exploration, a "digital trip", to an ancient shipwreck in deep waters, to a deep-lying ancient cultural heritage. references [1] gili telem, sagi filin, photogrammetric modeling of underwater environments, isprs journal of photogrammetry and remote sensing, 2010. [2] scarlatos d., agapiou a., rova m., photogrammetric support on an underwater archaeological excavation site: the mazotos shipwreck case, cyprus, 2010. [3] canciani, m., gambogi, p., romano, g., cannata, g., and drap, p., 2002, low cost digital photogrammetry for underwater archaeological site survey and artefact insertion. the case study of the dolia wreck in secche della meloria, livorno, italia, international archives of photogrammetry, remote sensing and spatial information sciences 34.5/w12, 95–100. [4] holt, p., 2003, an assessment of quality in underwater archaeological surveys using tape measurements, ijna 32.2, 246–51. [5] patias, p., 2006, cultural heritage documentation. international summer school 'digital recording and 3d modeling', aghios nikolaos, crete, greece, 24–29 april, www.photogrammetry.ethz.ch/summerschool/pdf/15_2_patias_chd.pdf, last updated 17 april 2006, accessed 27 july 2009. [6] drap p., durand a., provin r., long l., integration of multi-source spatial information and xml information system in underwater archaeology, torino, 2005. [7] demesticha, s., the 4th-century-bc mazotos shipwreck, cyprus: a preliminary report, the international journal of nautical archaeology, in press, 2010. [8] maas h., new developments in multimedia photogrammetry, institute of geodesy and photogrammetry, swiss federal institute of technology, zurich, 2000. [9] lavest j.m., rivers g., and lapreste j.t., dry camera calibration for underwater applications, machine vision and applications 13, pp. 245-253, 2003. [10] walford a., personal communication, eos systems inc, vancouver, 1996. [11] li r., li h., zou w., smith r.g., and curran t.a., quantitative photogrammetric analysis of digital underwater video imagery, ieee journal of oceanic engineering, 22(2): 364-375, 1997.
[12] green j., matthews s., turanli t., underwater archaeological surveying using photomodeler, virtualmapper: different applications for different problems, the nautical archaeology society, 2002. image based modeling from spherical photogrammetry and structure for motion. the case of the treasury, nabatean architecture in petra e. d'annibale d.a.r.d.u.s., engineering faculty, università politecnica delle marche, ancona, italy e.dannibale@univpm.it keywords: spherical photogrammetry, structure for motion, image based modeling, cultural heritage 3d documentation abstract: this research deals with an efficient and low cost methodology to obtain a metric and photorealistic survey of a complex architecture. photomodeling is an already tested interactive approach to produce a detailed and quick 3d model reconstruction. photomodeling goes along with the creation of a rough surface over which oriented images can be back-projected in real time. lastly, the model can be enhanced by checking the coincidence between the surface and the projected texture. the challenge of this research is to combine the advantages of two technologies already set up and used in many projects: spherical photogrammetry (fangi, 2007, 2008, 2009, 2010) and structure for motion (photosynth web service and bundler + cmvs2 + pmvs2). the input images are taken from the same points of view to form the set of panoramic photos, paying attention to use well-suited projections: equirectangular for spherical photogrammetry and rectilinear for the photosynth web service. the performance of spherical photogrammetry is already known in terms of its metric accuracy and acquisition quickness, but time is required in the restitution step because of the manual recognition of homologous points in different panoramas. in photosynth, instead, the restitution is quick and automated: the provided point clouds are useful benchmarks from which to start the model reconstruction, even if lacking in detail and scale. the proposed workflow needs ad-hoc tools to capture high resolution rectilinear panoramic images and to visualize photosynth point clouds and camera orientation parameters. all of them are developed in the vvvv programming environment. the 3dstudio max environment is then chosen because of its performance in terms of interactive modeling, handling of uv mapping parameters and real time visualization of the projected texture on the model surface. experimental results show how it is possible to obtain a 3d photorealistic model using the scale of the spherical photogrammetry restitution to orient web-provided point clouds. moreover, the proposed research highlights how it is possible to speed up the model reconstruction without losing metric and photometric accuracy. at the same time, using the same panorama dataset, it picks out a useful chance to compare the orientations coming from the two mentioned technologies (spherical photogrammetry and structure for motion). 1. introduction this research tries to improve an already tested approach for the metric and photorealistic reconstruction of a complex architecture: interactive image based modeling [1]. by photomodeling it is possible to produce a detailed and quick 3d surface reconstruction thanks to real time texture projection on the model surfaces.
the challenge here is to combine the advantages of two technologies already set up and used in many projects: spherical photogrammetry and structure for motion (sfm) (figure 1). the main difference with respect to previous experiences [2] is the type of planar photo used as input for the sfm: here the images are shots acquired by a purpose-built vr tool, saving a high resolution frame of the entire subject. key points of the proposed research are: low-cost instrumentation, a single operator, a single pc, measurement accuracy, procedure speed, real time control of the result, high resolution photo-realistic textures and shareable output. all the working phases are tested and optimized to work quickly with an ordinary pc. 2. instrument and photo acquisition the input data for spherical photogrammetry are panoramic images with equirectangular projection, obtained by stitching different photos taken from the same point while rotating around a pivot (figure 2). this allows working with high resolution images and having large (complete, if necessary) photographic scene information. the acquisition instruments are only a reflex camera (canon 60d, 18 mpx, 50 mm zoom lens, 35 mm eq.), a long lens monopod bracket and a tripod. the head is adapted to have the camera nodal point at the centre of the two rotation axes and thus to create a panoramic head. this system guarantees good stability, which is important when it is necessary to work with long exposure times because of poor illumination. a sharp image is decisive when carrying out an sfm procedure: a bad acquisition can give no results at all. figure 1: more photogrammetric survey information in a single 3d environment. figure 2: panoramic photo acquisition; a) panoramic head, b) some photos with the same nodal point (stitching software produces the spherical panorama), c) cropping of the first panorama (l=77000 px), d) data sheet to store information and convert pixel values into angle and uv coordinates, e) set of panoramas (equirectangular projection), average resolution 10000x10000 px. because of hardware limitations some images are resized to a resolution of 10000x10000 px. software and hardware restrictions are known problems in this research field; on the other hand, they drive the optimization of the entire survey process. 3. spherical photogrammetry survey spherical photogrammetry [3,4,5,6,7,8] is particularly suitable for architectural and archaeological metric recording and is characterized by a proven precision. sp.ph., making use of low-cost instrumentation and a few steps, allows orienting the acquired panoramas and obtaining some shape primitives (.dxf format) (figure 3). orientation and restitution procedures are manual: homologous points are collected by the user one by one. figure 3: spherical photogrammetry process; a) create points.exe, manual identification of homologous points, b) sphera.exe, panorama orientation tool by prof. fangi, c) panorama orientation info, d) geometric output (dxf): restitution lines in red, acquisition stations in green, collimation rays in gray. the 3d model scale is given through a direct measure, while the reference system origin and the model orientation are fixed by the user. in this way sp.ph. gives the chance to have an accurate orientation of the images, while the subsequent 3d point restitution remains time-consuming. in this manual restitution lies the main problem for complex architecture. the reliability of the sp.ph. approach allows using it as a reference system to orient and scale the subsequent sfm 3d models.
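the core of the manual restitution just mentioned is the conversion that the data sheet in figure 2d automates: in an equirectangular panorama the pixel column is proportional to the horizontal direction angle and the row to the vertical angle, so every collimated point yields a spatial ray from the panorama station. a minimal sketch follows (the panorama width matches the 77000 px example above; the pixel coordinates are invented for illustration):

```python
import numpy as np

def pixel_to_angles(col, row, width, height):
    """return (azimuth, zenith) in radians for a pixel of an equirectangular panorama."""
    azimuth = 2 * np.pi * col / width      # 0 .. 2*pi across the full width
    zenith = np.pi * row / height          # 0 (up) .. pi (down) over the height
    return azimuth, zenith

def angles_to_direction(azimuth, zenith):
    """unit collimation ray in the panorama's local frame."""
    return np.array([np.sin(zenith) * np.cos(azimuth),
                     np.sin(zenith) * np.sin(azimuth),
                     np.cos(zenith)])

az, ze = pixel_to_angles(col=41250, row=18000, width=77000, height=38500)
print(angles_to_direction(az, ze))
```

intersecting such rays from two or more oriented panorama stations is what produces the 3d restitution points, which is why the collection of homologous points remains the slow, manual part of the workflow.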
4. integration methodology by complementing the two techniques it is possible to combine a large amount of 3d geometric information (from sfm) with the precision of sp.ph. the sfm approach [9,10,11,12] allows recovering good 3d models from scattered photos, but these are not directly suitable for photogrammetric use: there is no information about scale and survey precision. different sfm tools are available. they carry out a fully automatic 3d reconstruction of subjects visible in several images by means of automated operations: image matching, camera calibration and dense point cloud creation. in this experience two different sfm based tools are tested: the photosynth web service and the sfm toolkit (bundler + cmvs2 + pmvs2). they are investigated by comparing their point cloud results with the lines or shapes coming from sp.ph. (figure 4). a good way to integrate different survey techniques and obtain a clear comparison is to visualize all the results in a unique 3d digital environment. one research goal is to test the approach's flexibility in integrating data from different survey techniques, laser scan data and direct survey measures when available; these could in fact be useful hints for a reliable precision comparison. figure 4: photogrammetric data visualization and management in 3d graphic software; a) photosynth + sp.ph., b) bundler + pmvs2 + sp.ph. to make the comparison possible (sfm versus sp.ph.) it was chosen to work with the same nodal point: high resolution planar images of the entire subject are acquired in the laboratory from the same panoramas used as input for the sp.ph. process. the idea of obtaining these planar projections quickly came during a virtual navigation with experimental tools. 5. vr tools to capture high resolution planar photos from equirectangular panoramas a vr tool [13] was created to allow interactive navigation of the high resolution spherical panoramas. the software is scalable and flexible, making it possible to develop a large number of functions. in this case plug-ins were developed to automate the uv spherical mapping and save the planar projection being visualized (figure 5). at the same time other information about the virtual camera is stored, first of all the fov (figure 6). the user can navigate and choose the frame to save in a common image file format as a very high resolution photo of up to 64 mpx. the powerful render engine enables a real time render of 8096x8096 px maximum resolution. this procedure minimizes the image distortion for two main reasons: the stitching software produces undistorted images and the vr tool adds no perspective distortion. this is confirmed by the only two camera parameters (radial distortion: k1, k2) coming out of one of the two sfm processes used (figures 8a, 8b). figure 5: high resolution photo acquisition from the spherical panorama; a) vr gui: after the panorama and frame are chosen, with a mouse click it is possible to save a very high resolution shot, b) the nodal point of the (virtual) pinhole camera is at the centre of the textured sphere.
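conceptually, the vr capture described above amounts to resampling the equirectangular panorama through a virtual pinhole camera of chosen direction and field of view. the sketch below reproduces that resampling with numpy and opencv; it is not the vvvv tool itself, and the file names, angles and output size are placeholders.

```python
import numpy as np
import cv2

def rectilinear_view(pano, fov_deg, yaw_deg, pitch_deg, size):
    """sample a square pinhole view of the given fov from an equirectangular image."""
    h_pano, w_pano = pano.shape[:2]
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)        # focal length in pixels
    u, v = np.meshgrid(np.arange(size) - size / 2, np.arange(size) - size / 2)
    rays = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # rotate the rays by yaw (around the vertical axis) and pitch (around the horizontal axis)
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rays = rays @ (r_yaw @ r_pitch).T

    # convert each ray to equirectangular pixel coordinates and resample
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))
    map_x = ((lon / (2 * np.pi) + 0.5) * w_pano).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * h_pano).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

pano = cv2.imread("treasury_panorama_equirectangular.jpg")
shot = rectilinear_view(pano, fov_deg=60, yaw_deg=15, pitch_deg=-5, size=2048)
cv2.imwrite("treasury_vr_shot.jpg", shot)
```

because the planar shots are cut from an already undistorted panorama, the only distortion the sfm step has to model is the small residual radial term noted above.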
6. structure for motion 6.1 photosynth photosynth is the first software used to carry out the research. it is a user friendly web service to automatically orient scattered photos in a few online steps. its web interface allows the user to navigate a virtual scene where the oriented photos are combined with the produced point cloud. lastly, camera calibration data and tie points are exported with additional software (synthexport). different tests were carried out, but only two of them are reported hereafter (figure 7). first, 10 photos (4288x2848 px) were acquired using a reflex camera (nikon d90, 12 mpx). then, 10 planar projections were taken with the developed vr tool and merely resized to a 4096x4096 px resolution, to give the two tests similar starting input. no additional information is required by photosynth. it is interesting to focus on the radial distortion parameters stored in the exported calibration data (figure 8): one order of magnitude separates the common photos from the vr shots. figure 6: high resolution images and their information; a) fov of the shots stored in a txt file, b) set of planar projections, image resolution 8096x8096 px. figure 7: experimental tests, from photo set acquisition to the point clouds; a) first photo set by digital camera, b) second image set by vr tool, c) synthexport permits direct download of the point cloud and camera orientation, d) first output: 16148 points, e) second output: 9233 points. figure 8: parameters of radial distortion from the exported calibration data; a) camera image distortion parameters, b) vr image distortion parameters. an ad-hoc tool (vedo.exe, fangi and d'annibale) was developed to convert the point cloud coming from the second test (vr shots) from .ply to .dxf format. the model is then scaled and rototranslated into the same reference system used in the restitution from panoramas. the experimental results are visualized in the same working environment to underline how the sfm result (the test with the second image set) differs in terms of orientation from the restitution coming from sp.ph. figure 9 shows this comparison: sp.ph. results on the left (restitution lines in red and panorama stations in green) and photosynth outputs on the right (oriented point clouds in black and image stations in blue). figure 9: photosynth versus sp.ph. survey. 6.2 sfm toolkit (bundler + cmvs2 + pmvs2) different open-source toolkits are available on the web for fully automatic 3d image based reconstruction. henri astre, r&d computer vision engineer, provides an sfm toolkit that can be downloaded and simply used as a stand-alone application on the windows platform. the toolkit contains different tools: bundlerfocalextractor, to extract the ccd width from exif using an xml database; bundlermatcher, to extract and match features using siftgpu; bundler, created by noah snavely; cmvs, clustering views for multi-view stereo, created by yasutaka furukawa; pmvs2, patch-based multi-view stereo software, created by yasutaka furukawa. as done previously with photosynth, the same two photo sets are tested: the first from the digital camera and the second from the vr tool. vr shots do not have the exif information used to extract the ccd width, but it is possible to pair them with their previously stored fov info (figure 6a). starting from an existing model it is possible to modify the image exif information by inserting the corresponding focal length values (35 mm eq.) (figures 10b, 10d).
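the pairing mentioned above boils down to a simple conversion: a stored horizontal field of view determines an equivalent focal length, either in 35 mm terms for the exif field or in pixels for the bundle adjustment. a minimal sketch (the fov value is illustrative):

```python
import math

def focal_35mm_from_fov(horizontal_fov_deg, full_frame_width_mm=36.0):
    """35 mm-equivalent focal length for a given horizontal field of view."""
    return full_frame_width_mm / (2 * math.tan(math.radians(horizontal_fov_deg) / 2))

def focal_px_from_fov(horizontal_fov_deg, image_width_px):
    """the same relation expressed in pixels, as used by sfm bundle adjustment."""
    return image_width_px / (2 * math.tan(math.radians(horizontal_fov_deg) / 2))

fov = 60.0                                   # degrees, as stored by the vr tool
print(round(focal_35mm_from_fov(fov), 1))    # about 31.2 mm equivalent
print(round(focal_px_from_fov(fov, 5120)))   # about 4434 px for a 5120 px wide shot
```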
figure 10: exif info association and xml database update; a) vr image set and virtual camera info, b) gui for exif editing, c) xml database updated, d) focal length value (exif file). it is necessary to update the .xml database, adding a new camera and its hypothetical sensor width (figure 10c). problems with the hardware engine and time-consuming elaborations can be overcome by resizing the images: a maximum resolution of 5120x5120 px is processed for all 9 images. the .ply files store the restitutions with the rgb information associated with the point clouds (figure 11). figure 11: tests with their different results; a) first photo set by digital camera, res. 4288x2848 px, b) second image set by vr tool, res. 5120x5120 px, c) first point cloud: 639,728 vertices, d) second point cloud: 1,089,854 vertices. the experimental results underline the performance of the proposed approach: the extracted point clouds are characterized by a good level of coverage, detail and noise reduction, not far from the feeling of dealing with a laser scanner. to test the experimental results by comparison with sp.ph., the second photo set is again chosen: it shares the projection center with the oriented panoramas (figure 12). the meshlab environment is chosen to visualize and process the point cloud, because of its advanced mesh processing system. interesting is the possibility of aligning meshes taking into account variation in scale: this is necessary for point clouds not collected directly by a 3d scanner device. figure 12: dense point clouds by the combination of vr and sfm toolkits. 7. point cloud orientation and mesh optimization all the restitutions are oriented according to the reference system used in the sp.ph. (figure 13). the mesh is made up of 1,500,000 triangles, too many to be handled in the graphic environment used to draw surfaces. therefore the mesh is optimized, trying not to lose important information about the subject geometry (figure 14). figure 13: point cloud orientation; a) point cloud orientation according to some points restituted by the sp.ph. method. figure 14: mesh optimization; a) high poly model: 970,333 vertices, 1,552,226 faces, b) optimized model: 44,959 vertices, 98,662 faces, c) exported low poly mesh. visualizing the results in a unique reference system makes it possible to add visual control and underline differences between the camera orientation parameters (figure 15). the mesh from pmvs2 and the lines restituted by the sp.ph. approach are very similar: it is difficult to highlight differences without another kind of control. figure 15: sfm and sp.ph. geometric survey in the same 3d environment. 8. texture mapping and image based modeling high resolution texture is a valid alternative to heavy 3d geometric detail that wastes hardware resources. texture mapping is already known [2] to be possible thanks to the panorama orientation and the uv mapping parameters. because the center and the orientation of the spherical projection are known, a datasheet can be created to automate the uv mapping tiling and offset computation (figure 16a).
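the mapping that such a datasheet automates can be sketched directly: each mesh vertex, seen from the known panorama centre, has an azimuth and zenith angle that index the equirectangular texture. the coordinates and heading below are invented for illustration.

```python
import numpy as np

def spherical_uv(vertex, pano_centre, heading_rad=0.0):
    """(u, v) in 0..1 of an equirectangular panorama for one mesh vertex."""
    d = vertex - pano_centre
    d = d / np.linalg.norm(d)
    azimuth = (np.arctan2(d[1], d[0]) - heading_rad) % (2 * np.pi)
    zenith = np.arccos(np.clip(d[2], -1.0, 1.0))
    return azimuth / (2 * np.pi), zenith / np.pi

vertex = np.array([12.4, 3.1, 7.8])    # a vertex of the optimized mesh (metres)
centre = np.array([10.0, 0.0, 1.6])    # panorama station from the sp.ph. orientation
print(spherical_uv(vertex, centre, heading_rad=np.radians(35.0)))
```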
the scene is then ready to perform spherical projections over any surface (figure 16b). in particular the texture projection on the sfm mesh reveals a good fit: it validates the experimental procedure and shows the possibility of using the extracted mesh as a good starting rough model from which to redraw 3d surfaces. figure 16: orientation and uv spherical mapping; a) data sheet for uv map conversion parameters, b) spheres with correct texture projection mapping. the mesh is then imported into the graphic environment as a useful help to speed up the interactive image based modeling, devised to draw geometric elements of architecture under photogrammetric control. the imported mesh can now be enhanced under the review of the photogrammetric geometric data and of the texture mapped on the surface. real time rendering allows visualizing a photorealistic texture during the modeling phase (figure 17); this is a useful help to control the drawn geometries. figure 17: texture projection and mesh control; a) pmvs2 mesh, sp.ph. lines and oriented panorama, b) photorealistic texture with spherical mapping, c) pmvs2 mesh with texture and lines from sp.ph. 9. conclusion experimental results show how it is possible to speed up and geometrically check image based modeling to obtain a 3d photorealistic model. first, it is possible to take advantage of vr tools to acquire high resolution shots and use them to run automatic reconstructions by means of sfm techniques (photosynth web service and bundler + cmvs2 + pmvs2). the resulting point clouds have good performance in terms of coverage and accuracy and can be oriented in accordance with the spherical photogrammetry restitution. then, with the aim of building a 3d digital model, the mesh can be optimized, resized and imported into a graphic environment; here it becomes a good rough 3d reference to aid the modeling phase. image based modeling (figures 18, 19) can in this way take advantage of three different benchmarks: restituted data from spherical photogrammetry, the optimized mesh from bundler + pmvs2, and the high resolution texture projected on the surface. step by step, the combination of these three references drives the modeler towards a photorealistic description of a complex architecture, without losing quickness and hardware performance. figure 18: image based modeling of some elements. in each working phase, thanks to innovative interactive systems, the outputs allow efficient real time visualizations of the complex architectural model. the performance of the proposed photogrammetric tools favors testing and investigation. lastly, using the same panorama dataset, the proposed research picks out a useful chance to make a comparison between spherical photogrammetry and structure for motion in terms of orientations and accuracy. moreover, the research underlines how this combination allows speeding up the interactive image based modeling, taking advantage of a correct image orientation (spherical photogrammetry) and of a large extracted point cloud as geometric reference. figure 19: low poly model with high resolution texture. 10.
references [1] e. d'annibale, g. fangi (2009) – interactive modelling by projection of oriented spherical panorama – 3d-arch'2009, 3d virtual reconstruction and visualisation of complex architectures – trento 25-29 february 2009, isprs archives vol. xxxviii-5/w1: 1682-1777, on cd. [2] e. d'annibale, s. massa, g. fangi (2010) – photomodeling and point clouds from spherical panorama, nabatean architecture in petra, jordan – c.i.p.a. workshop, petra 4-8 november 2010. [3] g. fangi (2006) – investigation on the suitability of the spherical panoramas by realviz stitcher for metric purposes – isprs archive vol. xxxvi part 5, dresden 25-27 september 2006. [4] g. fangi – una nuova fotogrammetria architettonica con i panorami sferici multimmagine – sifet symposium, arezzo 27-29 giugno 2007, cd. [5] g. fangi – the multi-image spherical panoramas as a tool for architectural survey – xxi international cipa symposium, 1-6 october 2007, atene, isprs international archive – vol. xxxvi-5/c53 – issn 1682-1750 – cipa archives vol. xxi – 2007, issn 0256-1840, pg. 311-316. [6] g. fangi – la fotogrammetria sferica dei mosaici di scena per il rilievo architettonico – bollettino sifet n. 3, 2007, pg. 23-42. [7] g. fangi, p. clini, f. fiori – simple and quick digital technique for the safeguard of cultural heritage. the rustem pasha mosque in istanbul – dmach 4 2008, digital media and its applications in cultural heritage, 5-6 november 2008, amman, pp. 209-217. [8] g. fangi (2009) – further developments of the spherical photogrammetry for cultural heritage – xxii cipa symposium, kyoto, 11-15 ottobre 2009. [9] richard hartley and andrew zisserman (2003). multiple view geometry in computer vision, cambridge university press. isbn 0-521-54051-8. [10] noah snavely, steven m. seitz, richard szeliski (2006). photo tourism: exploring image collections in 3d. acm transactions on graphics (proceedings of siggraph 2006). [11] noah snavely, steven m. seitz, richard szeliski (2007). modeling the world from internet photo collections. international journal of computer vision. [12] l. barazzetti, g. fangi, f. remondino, m. scaioni (2010) – automation in multi-image spherical photogrammetry for 3d architectural reconstructions – the 11th international symposium on virtual reality, archaeology and cultural heritage vast (2010), a. artusi, m. joly-parvex, g. lucet, a. ribes, and d. pitzalis (editors). [13] e. d'annibale (2010) – new vr system for navigation and documentation of cultural heritage – c.i.p.a. workshop, petra 4-8 november 2010.
meshlab http://meshlab.sourceforge.net/ photosynth http://www.photosynth.net synthexport http://synthexport.codeplex.com/ sfm toolkit http://www.visual-experiments.com/demos/sfmtoolkit/ sift gpu http://www.cs.unc.edu/~ccwu/siftgpu/ bundler http://phototour.cs.washington.edu/bundler/ cmvs2 http://grail.cs.washington.edu/software/cmvs/ pmvs2 http://grail.cs.washington.edu/software/pmvs/ potential influence of e-learning and open source solutions for education at palacký university in olomouc inspired by polytechnic university in valencia rostislav nétek department of geoinformatics, palacký university in olomouc, czech republic, rostislav.netek@upol.cz abstract this paper assesses different approaches to education in western and eastern europe. it is based on a case study which compares university conventions in spain and the czech republic, focused on e-learning and open source software. the education system at the polytechnic university in valencia (upv) puts much more emphasis on open source solutions and e-learning compared to the situation in the czech republic. gvsig is an open source geographic information system (gis), co-developed at upv as a collaboration of commercial companies with research institutions. lecturers at upv integrate free and open source software (foss) into classes to a significant degree and participate in open source communities like planet gvsig, gvsig outreach or the association for the promotion of foss4g and the development of gvsig. in fact, a complex system for self-education in the field of gis has been developed there. it combines a positive relationship to open source solutions and takes advantage of all sections together. the "aula virtual" project is a virtual training classroom using a virtual educational platform. great emphasis is put on step-by-step video tutorials. 300 videos are uploaded on the politube channel; they contain both video and audio and "time-stamp links". a time-stamp allows switching among videos and other sources (annotations, webpages, etc.) depending on the student's individual requirements. compared to the situation in the czech republic, where proprietary software is still preferred in the academic sphere, this topic brings room for discussion. this paper discusses two different points of view and the benefits of both of them, and proposes a solution with regard to the specifics of czech university education. keywords: e-learning, gvsig 1. introduction in the last few years, free software has become an alternative to proprietary software. open source software is a phenomenon in all fields of information technology; moreover, it is one of the most discussed topics, with significant impact in the field of geoinformatics, in both education and the private sector. solutions based on open source are nowadays used every day for research, development and education.
the education of gis is associated with the usage of e-learning platforms, which brings benefits for academic environments. there are many ways to take advantage of e-learning and to improve the teaching process, because solutions like video tutorials improve the effectiveness of learning. as [6] said, "many gis teachers select proprietary gis software for education because students can learn the mainstream software skills and have advantages in the job markets". on the other hand, some gis teachers prefer to use open source software because it is free of cost and allows the freedom to modify and distribute gis applications. 2. motivation this paper summarizes different approaches to education at two universities in western and eastern europe. it is based on a case study comparing university conventions in spain and the czech republic. the author, originally from the czech republic, spent two long-term internships at the polytechnic university in valencia, where he met gvsig for the first time and passed courses about it. he was surprised by the fact that in western europe open source software was deeply implemented into university education, compared to the situation at many universities in the czech republic. 3. gvsig gvsig is an open source geographic information system (gis). its development, with the aim of replacing proprietary gis software, is funded by the regional government of valencia (spain). the abbreviation gvsig means generalitat valenciana sistema d'informació geogràfica. the government of the valencian community wrote down in 2003 some requirements for the development of the new gis application: portability, modularity, an open-source license, interoperability and support of several standards. the development has been realized by a compact group of private companies and research institutions including universities, so a number of scientific extensions for gis analysis are currently being built. it runs on the windows, linux and mac os x operating systems under the gnu general public license (gpl), which allows free use, distribution, study and improvement. it supports many ogc/iso standards for geospatial data and, due to its modularity, it can be extended with additional extensions. currently gvsig is available in 13 languages [2], [11]. in general gvsig is a complex platform; besides the desktop base there are other components. according to [11], "gvsig desktop is a powerful gis solution designed to offer a free solution to all needs related to geographic information management. it is characterized as a comprehensive, easy to use solution, adapted to the user's needs". according to [12], gvsig mobile is characterized as "a project aimed to develop a free and open source gis/sdi client on mobile devices. gvsig mobile is a version of gvsig desktop adapted for mobile devices, with support for shapefiles, gpx, kml, gml, ecw, wms and images, capable of using gps. moreover, there is an extension for gvsig desktop, which enables geographical data to be exported from gvsig desktop to gvsig mobile". moreover, gvsig mini is available for mobile phones and provides the possibility to display map tile services, such as openstreetmap, yahoo maps, microsoft bing and others, on the platform of smartphones and mobile devices. it is available as gvsig mini for android and gvsig mini for java [11]. according to [8], qgis and gvsig are widespread for educational reasons at european universities (see figure 1).
according to the research testing in [3], gvsig and qgis appear to be relatively more suitable than the others. the disadvantage of gvsig is that its start-up time is longer than the others', but in fact a time of about 10 seconds is adequate. moreover, according to the testing in [3], gvsig offered balanced performance in efficiency and functionality at an acceptable level and was recommended for their research. according to another study made by [5], gvsig was the second best gis software with 106 functions in total, while the first position was occupied by the commercial product arcgis. figure 1: distribution of main open source software; source: vázquez 2011. 4. approach at polytechnic university in valencia, spain 4.1. attitude to open source there is an elegant example of how spanish education reacts to the typical arguments about open source. some years ago, two stereotypes about free software were written down: 1) it is not quality software and 2) companies do not support free software [1]. these stereotypes were clarified and given a positive meaning, thanks to the progressive approach in the education system. currently spanish lecturers answer with these two arguments: 1) "good software and bad software can be found in free platforms as well as in commercial products. the advantage is that in the case of free software the quality can be detected and adjusted according to the user's needs, due to its open and free nature." 2) "it is a lie – there are many companies supporting free software, but with a different type of business model. it is based on offering professional services where all of the investments are allocated towards generating wealth; not on a model where selling the product is the main component of the business, thereby converting part of the investment into an expense" [1]. as opposed to other countries in europe, this approach is fully accepted in spanish education in the field of geoinformatics. it means that free and commercial software have the same importance and neither of them is significantly preferred during higher gi education (typically bachelor study), when students gain a general knowledge about gis solutions. in fact, the spanish argument is that both commercial and free software stand on the same starting line and neither is preferred, because it is extremely important to become familiar with both during the education period. 4.2. gvsig project at upv according to real experience, education at upv shows a typical approach to successfully implementing open source solutions into higher education. academic teachers at upv have enormous enthusiasm for the gvsig project, because upv collaborates on gvsig development through the research group of geoenvironmental mapping and remote sensing (cgat). the reason is clear: due to its many connections to the valencia district, upv has been at the very centre of gvsig development since the beginning. on the other side, upv participates in the project through a course focused on gvsig; several research projects with a focus on gvsig have been carried out there, an educational web blog about gvsig is written by the lecturer, a whole package of video tutorials was made for students, an international gvsig conference takes place in valencia every year, and more. the main focus of these activities is the gvsig course.
strong emphasis is put on e-learning and self-education. besides common teaching, there is a full-semester project based on collaborative team work, where only the task is given. the task usually follows from real necessities, real data are used, and the outputs are sometimes used in practice by companies (a specific logistic problem, an inventory of children's playgrounds, etc.) – no demo data or purely theoretical exercises. the working process and the results are fully under the students' direction, which is quite an innovative method compared to the previous situation at palacký university, czech republic. exactly this approach was the inspiration for implementing a similar lesson at palacký university. in summary, the positive attitude and enthusiasm of the teachers is the cornerstone of implementing open source into education.

4.3. aula virtual & video tutorials

for every need, students can use an excellent source – a complex interactive tutorial. it was made by cgat especially for upv student courses and it is divided into four parts. the first one is a general introduction to gis; the second one is focused on geospatial data and their sources in spain (e.g. idee, inspire). the core part is the complete manual; it describes all the ways to work with the gvsig programme: the installation, the interface, all functions and extensions as well as specialized cases [7]. the teaching is quite intuitive, because this part is described in every detail, every step is accompanied by images, and every chapter contains examples and written-down exercises. the last part is the complete video tutorial, where each of the discussed steps and examples is recorded. this is the best way for e-learning; in this type of study there is no space for constraints or student questions. moreover, this complex tutorial is supplemented by another source written by the lecturer: some exercises and questions on gvsig are discussed on an educational web blog [10]. in comparison to the tutorial, this source describes details and new features related to university education. the "aula virtual" project is a virtual training classroom for gvsig, created under the agreement between the department of infrastructure and transport of the generalitat valenciana and the polytechnic university of madrid. the aim of the project is to develop and implement a virtual classroom for the gvsig training programme, using the virtual educational platform moodle. the results of this project are available to users all over the world with a focus on gvsig teaching and learning through the e-learning process. this component involves a series of activities covering intermediate and advanced levels [4]. the courses include a number of activities such as practices, exercises or quizzes based on interactive communication, including hypertext, images, videos and audio files. it means that the conventional teaching approach is fully supported by innovative e-learning methods, such as video files or a virtual classroom.

figure 2: video tutorial as part of the complex interactive education source made by cgat.

4.4. spanish spatial data infrastructure & web services

finally, another advantage for spanish education in the field of geoinformatics is the fact that all data produced in the area of spain are available through the spanish spatial data infrastructure (idee).
the main goal is to integrate through the internet all data, metadata, services and geographic information in spain at the state, regional and local government levels. idee provides more than 1000 wms and 200 wfs sources. in fact, idee is considered a complete data source by common users, it is fully supported across spanish society and the system really works. for this reason, web services are widely used, including at universities [13].

5. approach at palacký university in olomouc, czech republic

compared to the situation in western europe, the point of view at palacký university is quite different. according to the current syllabus [9] of the department of geoinformatics, students meet arcgis software in eight bachelor courses, while there is only one free software course. this illustrates the unbalanced promotion of free software, despite its advantages. typically, the commercial software arcgis by esri is preferred over others for gis education at all czech universities. on the other hand, the number of teachers and new courses focused on open source is increasing and e-learning systems are used widely. finding the reasons is a hard question; probably it is caused by quite different habits, behaviour and history. generally speaking, the education is based more on a theoretical point of view and an individual approach, compared to the collaborative working in spain. on the other hand, according to real experience from students' internships, czech students sometimes have a better disposition for starting work in companies after they graduate.

5.1. open source course

inspired by the situation at upv, there is an effort to implement open source widely into education at the department of geoinformatics, upol. as a pilot case, the course "programmatic tools of gis 2" in the summer semester 2012 was chosen. the course was attended by 31 students in total; they were divided into groups of 4–5 persons. the course introduces the general principles of the open source approach, and software packages such as saga gis, qgis and grass are mentioned. most of the time and interest is devoted to the gvsig project, and the course is divided into two separate parts. during the weekly lessons the basic steps are shown practically in the classroom, and for every lesson new homework is assigned. whereas previously most homework and projects were solved individually, this course is oriented towards a collaborative process: students work together in teams. moreover, strong emphasis is put on self-education in groups through digital sources, which is quite an innovative method there. only the task and the sources are given; in fact, the whole process of searching for materials, working and presenting the results is fully under the students' direction. the teacher checks all steps and gives advice and answers to questions, but the students are responsible for managing the problem solving. for the group exercises focused on gvsig, a complex structure of educational material was compiled, which contains two main parts: video tutorials made by cgat (section 5.2) and a list of e-learning sources focused on gvsig (section 5.3). at the end of the course a voluntary evaluation was made by questionnaire, in which the students described their feelings about the collaborative working. generally, the course brings the benefits of independence and project management experience, but many students criticized the number of team members.
five people for each team is quite enough for collaborate working on gis project made on one laptop. 5.2. video tutorials for self education was chosen complex system made by cgat as a primary source (see section 4.3). it describes working with gvsig programme step by step, divided into four levels. first level contains only plain text about the task a solution, in the second one is captured screen supplemented by labels, which is an appropriate solution in many similar sources. in the next level “static” tutorial is replaced by video tutorial with audio description. more than 300 videos are uploaded on both youtube channel and politube channel. moreover videos contain “timestamps” connected with annotation and “links” provide connection with timestamp into another video. in fact, all videos make a network structure and there is possibility to switch among videos dependent on user’s needs. the last level contains links to another source like tutorials, manuals, on-line courses, forums or web blogs of gvsig community (see section 5.3). this complex system brings benefits that students can learn any step anytime, anywhere and no matter how often they repeat it. 5.3. gvsig community generally, free software has one or two official web pages with some forum where users’ questions are answered. the situation about gvsig project is quite different. spanish and english speaking users can find a huge number of online sources, like official pages about geoinformatics fce ctu 8, 2012 84 nétek, r.:potential influence of e-learning and open source solutions figure 3: network structure of video tutorials by cgat; timestamps and links provide connections between videos; source: vázquez 2011. gvsig project, forums and blogs as well as unofficial fan pages. on the other side, there is a disproportion between digital sources and published paper books. gvsig is a pure example of open source programme developed by collaborate process by people which are seriously interested in. their aim is to develop powerful software, which is obtained with communication. this communication takes place on the internet only, because of fast feedback compared to conventional process. in fact, at the same date, when the book is published, the book is not actual. the rapid evolution in the digital media brings changes every day. that is the reason, why gvsig is supported by many web sources first of all (similarly to another open source projects). following list shows some selected examples of digital sources focused on gvsig, which are used by students for education at upol. 
http://www.gvsig.gva.es/ the official web page http://www.gvsig.org/ the official international web page http://gvsig.com/ – the official web page of gvsig association http://blog.gvsig.org/ the web blog of gvsig project team http://planet.gvsig.org/ the community of bloggers http://gvsigce.org/ the gvsig community edition http://outreach.gvsig.org/case-studies another community http://edugvsig.blogspot.com/ the web blog focused on education with gvsig http://gvsigmac.blogspot.com/ the web blog focused on gvsig solution on macintosh platform http://www.gvsig-training.com/ training platform of gvsig association http://gvsig3d.blogspot.com/ the web blog focused on 3d visualization by gvsig http://gvsig-argentina.blogspot.com/ the web blog focused on usage gvsig in south america http://gvsigrussia.wordpress.com/ the web blog focused on usage gvsig in russia geoinformatics fce ctu 8, 2012 85 http://www.gvsig.gva.es/ http://www.gvsig.org/ http://gvsig.com/ http://blog.gvsig.org/ http://planet.gvsig.org/ http://gvsigce.org/ http://outreach.gvsig.org/case-studies http://edugvsig.blogspot.com/ http://gvsigmac.blogspot.com/ http://www.gvsig-training.com/ http://gvsig3d.blogspot.com/ http://gvsig-argentina.blogspot.com/ http://gvsigrussia.wordpress.com/ nétek, r.:potential influence of e-learning and open source solutions figure 4: youtube channel of gvsig project. https://joinup.ec.europa.eu/software/gvsig-desktop/description – the gvsig overview http://mmedia.uv.es/index?way=visited&f=tc.category_id&w=212 – aula virtual multimedia http://cartolab.udc.es/cartoweb/fonsagua/ the water and sanitation programs on gvsig http://cartolab.udc.es/cartoweb/gvsig-eiel/ the survey of local infrastructure and equipment http://foss4gis.blogspot.com/ another web blog about gvsig http://gvsigconsultoresaa.blogspot.com/ another web blog about gvsig http://jornadas.gvsig.org/ international conference about gvsig http://cgat.webs.upv.es/bigfiles/gvsig/gvsig_112.htm interactive course made by cgat beside the core software, official community was established. the objective of the “association for the promotion of foss4g and the development of gvsig",also called gvsig association, is the sustainability of the gvsig project based on the maintaining of the professional structure and the infrastructures of the gvsig community. the gvsig association operates gvsig training webpage which offers training and learning courses for single users, developers and organizations, as well. a lot of courses, are available for free as well as paid workshops and webinars. planet gvsig is a main gateway of websites made by gvsig community members the gvsig campus community: this group of community emerges as a result of the work done mainly by the universities on projects. this new stage focuses on expanding this working group, promoting the participation of universities and tertiary training institutions in different countries and different areas (e.g. geomatics) having interest to undertake student's work related to gvsig products. the gvsig blog is oriented on practical examples and exercises with programs, solving the errors and bugs. 
geoinformatics fce ctu 8, 2012 86 https://joinup.ec.europa.eu/software/gvsig-desktop/description http://mmedia.uv.es/index?way=visited&f=tc.category_id&w=212 http://cartolab.udc.es/cartoweb/fonsagua/ http://cartolab.udc.es/cartoweb/gvsig-eiel/ http://foss4gis.blogspot.com/ http://gvsigconsultoresaa.blogspot.com/ http://jornadas.gvsig.org/ http://cgat.webs.upv.es/bigfiles/gvsig/gvsig_112.htm nétek, r.:potential influence of e-learning and open source solutions on the gvsig outreach students can find real case studies where gvsig solution is applied into the practice. last but not least source should be a list of contributors, who write personal blogs or articles about gvsig (figure 5). figure 5: list of contributors on planet gvsig. 6. conclusion the purpose of this paper is to show different approaches in using open source for education between polytechnic university in valencia (spain) and palacký university in olomouc (czech republic) and to give an overview how open source solution should be implemented into the education. although commercial software played dominant role for gis development in the past, the open source solutions has become a stronger influence in the last years. there is no technical reason why commercial software should be preferred so significantly, because each alternative programme (e.g. gvsig) provides the same basic features, functions and analysis as proprietary software. moreover, the open source brings benefits like interoperability, open source code, free cost, etc. the phenomenon of free software is obvious at spanish universities first of all. positive approach to open source by spanish academician should be described by three main characteristics: 1. neither commercial nor free software is preferred or repressed. 2. positive attitude and enthusiastic for foss by academics. 3. e-learning supports conventional teaching process. based on the experiences from polytechnic university in valencia, there is the range of learning possibilities, which should be implemented to support new learning environment. the current syllabus at palacký university demonstrates that arcgis software is used in eight courses, while open source in only one. a pilot course “programmatic tools of gis 2” in the geoinformatics fce ctu 8, 2012 87 nétek, r.:potential influence of e-learning and open source solutions summer semester 2012 has been opened. it has been oriented on open source solutions only, especially on gvsig project. number of 31 students in total have been registered in. for collaborate requirements they were divided in groups of 4-5 persons. the education structure is patterned on self education in students’ teams, in combination of e-learning sources. interactive tutorial based on videos and list of web sources have been prepared for every lesson by the lecturer. this is an innovative process of education there. according to student’s evaluation this structure is more comfortable for them due to possibility repeating sources and co-working anytime and anywhere, but it is crucial to be in the connection with lecturer. strong recommendation is to check students´ steps by lecturer every week and ask them if any assistance is required. each group should apply different steps, due to the fact that learning process is fully managed by students, but lecturer’s role is to keep the steps in the right direction by well-timed tips and helps. individual habits between czech and spain academic system require a different number of team-members. 
students confirmed that communication in the group of five students is complicated in case of computer-based exercises. about three or four persons should be set as ideal number of members for incident-free collaborate working. as has been emphasised, this case study can provide opposite of courses where only proprietary software is used. using gis open source classes has been proven as highly eligible for bachelor study, because students gain an alternative overview. the main recommendation is to enable students be familiarize themselves with both types of software. no matter which free software is introduced, but general principles and advantages of foss are crucial for students’ overview. the benefit of described course is that some students themselves have expressed interest for open source topic for its bachelor thesis, which was not regular before. general recommendation for the following course is to be focused on one software only, because of similar interface and operating. if free software is used, the sources of free data should be mentioned, as well. from the practical point of view, it is strongly recommended to present law and legal issues of foss as well as the possibility of customization and extension by open source code. the open source course was positively evaluated by all students as well as by the lecturer. based on experiences from the first round, the same course will continue in the summer semester 2012/2013. references [1] anguix a. (2011): 7th international gvsig conference. valencia, 92 p. [2] cropper, s. (2010): the use of gvsig as the primary geographic information system for the analysis of spatial data and the production of maps in a small ecological consulting firm. in: open planet, vol. 4, pp. 20-29. [3] chen, d. et al. (2010): assessment of open source gis software for water resources management in developing countries. in: journal of hydro-environment research, vol.4, pp 253-264. [4] poveda m. a. b., gonzales, m. a. (2010): proyecto: aula virtual gvsig. madrid, 8 p. geoinformatics fce ctu 8, 2012 88 nétek, r.:potential influence of e-learning and open source solutions [5] sillero n., tarroso p. (2010): free gis for herpetologists: free data sources on internet and comparison analysis of proprietary and free/open source software. in: acta herpetologica, vol. 5, pp 63-85. [6] tsou m., smith j. (2011): free and open source software for gis education. san diego, 18 p. [7] urbano, f. et. al. (2010): wildlife tracking data management: a new vision. in: philosophical transaction of the royal society, vol. 365, pp 2177-2185. [8] vázquez, j.p., platero, m.m.n. (2011): ¿cúal es la mejor forma de aprender gvsig? in: 7th international gvsig conference, valencia, 40p. [9] courses bachelor study geoinformatics. online [http://geoinformatics.upol.cz/ epredmety.php] [10] edugvsig. online [http://edugvsig.blogspot.com/]. [11] gvsig desktop. online [http://www.gvsig.com/products/gvsig-desktop]. [12] gvsig mobile. online [http://www.gvsig.com/products/gvsig-mobile?set_ language=en]. [13] idee. online [http://www.idee.es/shownewslist.do?cid=pideep_noticias]. 
geoinformatics fce ctu 8, 2012 89 http://geoinformatics.upol.cz/epredmety.php http://geoinformatics.upol.cz/epredmety.php http://edugvsig.blogspot.com/ http://www.gvsig.com/products/gvsig-desktop http://www.gvsig.com/products/gvsig-mobile?set_language=en http://www.gvsig.com/products/gvsig-mobile?set_language=en http://www.idee.es/shownewslist.do?cid=pideep_noticias geoinformatics fce ctu 8, 2012 90 testing of the accuracy dependency of prismless distance measurement on the beam incidence angle pavel třasák, martin štroner, václav smítka, rudolf urban department of special geodesy faculty of civil engineering czech technical university in prague pavel.trasak@fsv.cvut.cz keywords: prismless distance measurement, accuracy, testing, incident angle abstract the article assesses the precision development of distance measurement using prismless distance meters, both in relation to the changing length of the measured distance and the changing incidence angle of the distance meter’s beam. the article presents the design and performance of an original experiment, the characteristics of instruments used, statistical evaluation of experimental data and formulation of conclusions on the precision rate of distance measurement using prismless instruments. introduction the problems falling under research plan vz 1 “reliability, optimization and durability of building materials and constructions”, partial task “geodetic monitoring ensuring the reliability of structures”, also involve testing surveying measuring instruments, which is a key part of metrological safeguarding of instruments and aids used in on-site surveying measurements. measurements using prismless distance meters are becoming increasingly more common in surveying practice allowing the measurement of points directly on structures eliminating the need for reflective beacons (reflective foils, corner cube prisms). their principal advantages include time economies and frequently also considerable cost economies in the case of poor accessibility of measured points. the measurement of points situated on rock walls, in mines or on façades of buildings may serve as an example. the precision rate of measurements using prismless distance meters tends to be lower than in measurements applying reflective prisms or foils, which, however, does not pose a problem in many jobs. to utilize this technology with adequate effectiveness, it is advisable to know the characteristics of the measurement. it is a generally known and recognized fact that the precision of measurements made with this type of distance meters falls with the growing deflection from the normal incidence onto the measured surface. the article describes a measuring experiment whose objective is to confirm or disprove the above-mentioned fact about a systematically falling precision of measured distances. due to the fact that different manufacturers of measuring instruments may use different electro-optical systems of determining measured distances, three different instruments commonly applied in surveying practice were used for the experiment – total stations equipped with prismless distance meters and also a terrestrial laser scanner. geoinformatics fce ctu 2011 117 trasak, p. 
figure 1: used instruments

instruments and aids used

the testing was performed with instruments and measurement aids currently available at the department of special geodesy at the faculty of civil engineering, ctu in prague.

tested measuring instruments

as mentioned above, a total of 4 measuring instruments were used during the measuring experiment (3 total stations, 1 terrestrial laser scanner). to preserve the maximum objectivity of the experiment, the measuring instruments were selected to cover the most varied characteristics possible. the choice of instruments was conditioned on their diversity in terms of instrument design (different manufacturers), technological differences (varied age of instruments) and the varied precision rates of the implemented prismless distance meters. references to literature containing detailed information on the instruments are given in fig. 1, while the precision characteristics are presented in tab. 1.

                trimble s6      topcon gpt-7501   topcon gpt-2006           leica hds 3000
  σφ            0.3 mgon        0.3 mgon          2.0 mgon                  3.8 mgon
  σz            0.3 mgon        0.3 mgon          2.0 mgon                  3.8 mgon
  σdh           1 mm + 1 ppm    2 mm + 2 ppm      3 mm + 2 ppm              –
  σdb           3 mm + 2 ppm    5 mm              10 mm (< 25 m)            4 mm (< 50 m)
                                                  3 mm + 2 ppm (> 25 m)

table 1: characteristics of the selected measuring instruments, where σφ is the standard deviation of measured horizontal directions, σz of measured zenith distances, σdh of distances measured with a prism and σdb of distances measured by a prismless distance meter

a device simulating a reflective surface

the target surface used to reflect the distance meter's beam of electromagnetic rays was a wooden board with dimensions of 420 mm (height) × 320 mm (width). the board was fixed onto the carrier of a terrestrial photogrammetric camera (fig. 2), which, in combination with a surveyor's tripod, allows precise positioning of the vertical axis of rotation of the target board above the target point (baseline point, see below) and precise turning of the reflective surface.

figure 2: a target wooden board fixed onto the photogrammetric camera carrier and placed on a surveyor's tripod

description of the experiment

the measuring experiment was planned to test measurements made by the prismless distance meters of the selected instruments under various incidence angles of the distance meter's laser beam onto the surface of the measured object (the wooden reflective board). because different behaviour of the measuring instruments may be expected at different distances, the measurement precision was tested not only in relation to the incidence angle of the distance meter's beam onto the reflective surface, but also in relation to the length of the measured distance.
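for orientation, the "a mm + b ppm" accuracy specifications from tab. 1 translate into a distance-dependent standard deviation; a minimal sketch of the conversion (a hypothetical helper, not part of the experiment):

# hypothetical helper (not from the paper): evaluates an accuracy
# specification of the form "a mm + b ppm" at a given distance
def spec_sigma_mm(a_mm: float, b_ppm: float, distance_m: float) -> float:
    """standard deviation in millimetres for a distance given in metres."""
    return a_mm + b_ppm * 1e-6 * distance_m * 1000.0

# e.g. a "3 mm + 2 ppm" prismless specification at the tested baseline distances
for d in (5, 10, 20, 40):
    print(d, "m:", spec_sigma_mm(3, 2, d), "mm")   # 3.01, 3.02, 3.04, 3.08 mm

the ppm term is negligible at the distances tested here, so the nominal accuracy is practically constant along the baseline.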
the site selected for the performance of the measuring experiment was the ground-floor technological storey of the building of the faculty of civil engineering, ctu in prague, where constant all-day conditions affecting the use of a prismless electronic (laser) distance meter (i.e. affecting the transmission of the distance meter's beam of electromagnetic rays) may be expected: constant temperature, atmospheric pressure and air humidity, and, further, uniform, unchanging lighting of the whole measured space.

in the first phase of the experiment, the baseline was set out with a precision of 1 centimetre, and the points for positioning the measured surface were located at distances of 5 m, 10 m, 20 m and 40 m from the measuring station s (the measuring instrument's centre). as the experimental data processing does not work with actual deviations of measured distances, but only with sample standard deviations determined from repeated measurements, 1-cm precision in setting out the baseline is quite sufficient. both the measuring instrument (this applies to all selected instruments) and the reflective board (together with the photogrammetric camera carrier) were fixed onto surveyor's tripods and located above the baseline points at an approximate height of about 1.2 m. the turning of the target board was selected in an interval of 0 to 90 gon. by gradually turning the surface in 15-gon steps, a total of 7 board positions were obtained (0 gon, 15 gon, 30 gon, 45 gon, 60 gon, 75 gon, 90 gon).

figure 3: baseline diagram

during the whole experiment, the tested instrument was placed at the initial point of the baseline, and the tripod with the reflective target board was gradually centred and levelled over the individual baseline points. for each configuration, i.e. for each distance (4 distances in all) and for each turning of the reflective board (7 turnings in all), repeated measurements of the horizontal distance were performed using the prismless electronic distance meter.

specification of the number of repetitive distance measurements

the number n of repeated measurements of the distance was specified by considering the magnitude of the standard deviation of the sample standard deviation of the measured distance σs [5], which is defined by the relation

$$\sigma_s = \frac{\sigma}{\sqrt{2(n-1)}}, \qquad (1)$$

where σ is the standard deviation of the distance. based on the previously formulated condition that the standard deviation σs may equal at most 10% of the magnitude of the standard deviation σ, the required size of a random sample follows from

$$\sigma \, \frac{1}{\sqrt{2(n-1)}} = 0.1\,\sigma \;\;\Rightarrow\;\; n = 51. \qquad (2)$$

for the total of 7 different positions of partial turning of the target board (0 gon, 15 gon, 30 gon, 45 gon, 60 gon, 75 gon, 90 gon) and 51 repeated measurements, a total of 357 distances were to be measured at each target point of the baseline. for the whole baseline (4 different target points), the total number of distances measured by one instrument equals 1428 (28 random samples). in the whole experiment, considering 4 instruments, a total of 5712 distances were measured.
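the choice of n = 51 follows directly from relations (1) and (2); a minimal sketch of the computation (an assumed illustration, not the authors' code):

import math

# smallest n for which sigma_s = sigma / sqrt(2(n-1)) does not exceed 10 % of sigma,
# cf. relations (1) and (2)
def required_repetitions(ratio: float = 0.1) -> int:
    n = 2
    while 1.0 / math.sqrt(2 * (n - 1)) > ratio:
        n += 1
    return n

n = required_repetitions(0.1)
print(n)                                    # 51 repetitions per configuration
print(7 * n, 4 * 7 * n, 4 * 4 * 7 * n)      # 357, 1428 and 5712 measured distances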
assessment and results of the experiment

processing procedure

as mentioned above, the objective of the experiment was to assess the precision development of distance measurements made by a prismless distance meter, both in relation to the growing length of the measured distance and the growing turning of the reflective surface. the principal accuracy characteristic of the distance meter considered in this case is the sample standard deviation determined from repeated measurements of a distance in a certain configuration (at a certain distance and partial turning of the reflective surface), i.e. the sample standard deviation of a random sample of 51 values. this deviation is given by the relation

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}, \qquad (3)$$

where n is the sample size, xi is the i-th measured distance and x̄ is the sample mean of the measured distances. the final result of data processing within the experiment is the assessment of the input random samples (4 × 28 in total), the determination of the sample standard deviations and the assessment of whether their development within a changing configuration is random or whether it is influenced by significant effects reducing the accuracy of distances measured by prismless distance meters. the experimental data are processed using the following procedure:

1. testing of outliers in random samples and their potential elimination.
2. verification of normality of random samples.
3. homogeneity assessment of random samples.
4. setting of the confidence interval for the standard deviation of a measured distance.
5. regression analysis, setting of a linear change in the standard deviation of a measured distance.
6. graphic interpretation of the accuracy evolution of a measured distance.

elimination of outliers

in repeated measurements of distances using an electronic distance meter, sporadic gross measurement errors must be presumed (caused e.g. by a sudden fluctuation of natural conditions along the measured distance). this leads to the occurrence of outlying values in the random samples of measured distances. it is advisable to eliminate outliers from the random samples during processing, thus enhancing the objectivity of the evaluated data. based on the assumption that the random samples belong to a population with a normal probability distribution and that the sample sizes are relatively large, the outliers to be eliminated were detected on the basis of grubbs' test [6]. testing at the 5% significance level detected individual outliers of measured distances (in the most unfavourable case, 3 outliers were eliminated from a random sample).

verification of normality of random samples

the random samples of repeated distance measurements are presumed to belong to a population with a normal probability distribution. to confirm this assumption, the experimental data are subjected to testing using d'agostino's k² test [7]. the result of testing at the 5% significance level confirmed the normality of all tested random samples.
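both screening steps can be reproduced with common statistical libraries; a minimal sketch (assumed file name and implementation details, not the authors' code):

import numpy as np
from scipy import stats

# iterative two-sided grubbs' test at alpha = 0.05, followed by
# d'agostino's k^2 normality test on the cleaned sample
def grubbs_clean(sample, alpha=0.05):
    x = np.asarray(sample, dtype=float)
    while x.size > 3:
        mean, s = x.mean(), x.std(ddof=1)
        idx = np.argmax(np.abs(x - mean))            # most extreme observation
        g = abs(x[idx] - mean) / s
        n = x.size
        t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
        if g > g_crit:
            x = np.delete(x, idx)                    # eliminate the outlier and repeat
        else:
            break
    return x

sample = np.loadtxt("s6_5m_00gon.txt")               # assumed file: one random sample (51 repetitions)
cleaned = grubbs_clean(sample)
k2, p = stats.normaltest(cleaned)                    # d'agostino's k^2 test
print(sample.size - cleaned.size, "outliers removed, normality p-value:", p)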
homogeneity assessment of random samples

the verification of the homogeneity of the random samples of measured distances is one way of assessing the effect of the configuration (the effect of the length of the measured distance and of the turning of the reflective surface) on the accuracy of a distance measured by a prismless distance meter. the principle of this method consists in the application of statistical homogeneity tests. these tests work with groups of k random samples and verify the null hypothesis that the variances of the populations σi², to which the random samples belong, are equal to each other:

$$H_0: \sigma_1^2 = \sigma_2^2 = \dots = \sigma_k^2, \qquad H_1: \sigma_1^2 \neq \sigma_2^2 \neq \dots \neq \sigma_k^2. \qquad (4)$$

hence, the created groups of random samples of measured distances allow testing whether the sample standard deviations of the measured distances si (or the sample variances si², respectively) are in correspondence with a joint standard deviation of the distance σ (or a joint population variance σ², respectively). thus, provided all samples corresponding to the same distance are sorted into groups (4 groups with 7 samples each), and provided the null hypothesis is not rejected during the testing of these groups, it may be verified that the accuracy of the measured distance does not depend on the turning of the reflective surface (i.e. on the incidence angle of the distance meter's beam). based on the assumption of the normality of the random samples, bartlett's test was selected for the testing of homogeneity. the testing criterion of bartlett's test for a group of k random samples is the variable B, defined by the expression

$$B = \frac{(n-1)\ln s_c^2 \;-\; \sum_{i=1}^{k}(n_i-1)\ln s_i^2}{C}, \qquad (5)$$

where the constant C equals

$$C = 1 + \frac{\sum_{i=1}^{k}\frac{1}{n_i-1} \;-\; \frac{1}{n-k}}{3(k-1)}, \qquad (6)$$

si² is the sample variance of the i-th sample of measured distances with size ni,

$$s_i^2 = \frac{1}{n_i-1}\sum_{j=1}^{n_i}\left({}^{i}x_j - {}^{i}\bar{x}\right)^2, \qquad i = 1, 2, \dots, k, \qquad (7)$$

and sc² is the pooled estimate of the variance expressed in the form

$$s_c^2 = \frac{1}{n-1}\sum_{i=1}^{k}\sum_{j=1}^{n_i}\left({}^{i}x_j - {}^{i}\bar{x}\right)^2, \qquad n = \sum_{i=1}^{k} n_i. \qquad (8)$$

assuming the validity of the null hypothesis and the condition of a minimum sample size ni ≥ 6, the variable B approximately follows the χ² distribution. the acceptance or rejection of the null hypothesis about the equality of the population variances is assessed at the level of significance

$$P\left(B > \chi^2_{\alpha,(k-1)}\right) = \alpha = 0.05, \qquad (9)$$

where χ²α,(k−1) is the critical value of the χ² distribution with (k − 1) degrees of freedom. if the value of the testing criterion B exceeds the critical value χ²α,(k−1), the null hypothesis is rejected at the significance level α and the homogeneity of the samples is not proved. besides the decision on the rejection or confirmation of the null hypothesis at the selected significance level α, the testing also yields the p-value, i.e. the probability at which the testing criterion equals the critical value,

$$P\left(B = \chi^2_{p,(k-1)}\right) = p. \qquad (10)$$

the determined p-value describes the limit level of significance, i.e. the maximum probability of rejecting the null hypothesis despite its validity. the results of testing the homogeneity of the groups of random samples of measured distances are displayed in tab. 2. the groups of random samples in which the null hypothesis was not rejected at the 5% significance level, i.e. in which the effect of the distance meter's beam incidence angle on the accuracy of the distance measured by a prismless distance meter was not proved, are those with p-values greater than 5%.

trimble s6
  distance [m]    B        p-value [%]
  5               45.452   0.0
  10              22.737   0.1
  20              15.019   2.0
  40              94.285   0.0

topcon gpt-7501
  distance [m]    B        p-value [%]
  5               9.305    15.7
  10              7.599    26.9
  20              11.881   6.5
  40              69.255   0.0

topcon gpt-2006
  distance [m]    B        p-value [%]
  5               45.452   0.7
  10              22.737   80.6
  20              15.019   3.6
  40              94.285   23.1

leica hds 3000
  distance [m]    B        p-value [%]
  5               14.442   2.5
  10              1.792    93.8
  20              8.384    21.1
  40              15.096   2.0

table 2: results of bartlett's test of homogeneity of samples (critical value χ²₀.₀₅,₆ = 12.592 in all cases)
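for one group of seven samples, the homogeneity test can be reproduced with the standard library implementation of bartlett's test; a minimal sketch with an assumed data layout (not the authors' code):

import numpy as np
from scipy import stats

# assumed data layout: for one instrument and one baseline distance, the seven
# samples measured at the turnings 0-90 gon are tested for homogeneity of variances
turnings = [0, 15, 30, 45, 60, 75, 90]                      # gon
samples = [np.loadtxt(f"s6_5m_{w:02d}gon.txt") for w in turnings]

B, p = stats.bartlett(*samples)                             # testing criterion and p-value
crit = stats.chi2.ppf(0.95, len(samples) - 1)               # chi^2_{0.05,6} = 12.592
print(f"B = {B:.3f}, critical value = {crit:.3f}, p-value = {100 * p:.1f} %")
if B > crit:
    print("h0 rejected: homogeneity of the samples is not proved")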
assuming that the random sample of measured distances (serving for the construction of the confidence interval) is taken from the population with a normal probability distribution, the 95% confidence interval is set as p   √√√√ (n− 1)s2 χ21−α/2,n−1 ≤ σ ≤ √√√√(n− 1)s2 χ2 α/2,n−1 ≤   = 1 −α = 0.95, (11) geoinformatics fce ctu 2011 123 trasak, p. et al: testing of the accuracy dependency of prismless distance measurements on the beam incidence angle trimble s6 topcon gpt-7501 distance c χ20.05,6 p-value [m] [%] 5 45.452 12.592 0.0 10 22.737 0.1 20 15.019 2.0 40 94.285 0.0 distance c χ20.05,6 p-value [m] [%] 5 9.305 12.592 15.7 10 7.599 26.9 20 11.881 6.5 40 69.255 0.0 topcon gpt-2006 leica hds 3000 distance c χ20.05,6 p-value [m] [%] 5 45.452 12.592 0.7 10 22.737 80.6 20 15.019 3.6 40 94.285 23.1 distance c χ20.05,6 p-value [m] [%] 5 14.442 12.592 2.5 10 1.792 93.8 20 8.384 21.1 40 15.096 2.0 table 2: results of bartlett’s test of homogeneity of samples where 1−α is the confidence coefficient, n is the sample size, s is the sample standard deviation of a measured distance and χ21−α/2,(n−1) (resp. χ 2 α/2,(n−1) ) is the value of the distribution χ 2 with (n− 1) degrees of freedom. the principle of assessing the configuration effect on the precision rate of distance measurement consists in the sorting out of random samples into individual groups according to the length of measured distances (see sorting into groups in par. "homogenity assessment of random samples"), determination of confidence intervals for the standard deviation of a distance and assessment whether the confidence intervals in individual groups overlap and, therefore, there exists a common interval in which a standard deviation of a distance common for the whole group is found. in the case that such an interval is found, we may claim that random samples are in correspondence with the same standard deviation of a distance and that the precision rate of distance measurement is not affected by a change in turning the reflective surface at a certain measured distance. the occurrence of intervals of a potential common standard deviation was studied only graphically, and the results are displayed in fig. 4. the results in the figure above correspond to a greater part to the results obtained during the testing of homogeneity of samples (see par. "homogenity assessment of random samples"). setting a linear change in the standard deviation of a measured distance with respect to the linearly growing incidence angle of the distance meter’s beam to the reflective target surface, according to a speculated, generally recognized opinion (see par. "introduction"), a linear or another monotonous growth pattern of the standard deviation of a measured distance σ may be assumed. based on this assumption, the dependence between the incidence angle of the distance meter’s beam and the accuracy rate of a distance measured by a prismless distance meter may be investigated using the regression analysis method. the geoinformatics fce ctu 2011 124 trasak, p. 
setting a linear change in the standard deviation of a measured distance

with respect to the linearly growing incidence angle of the distance meter's beam onto the reflective target surface, the generally recognized opinion (see par. "introduction") suggests a linear, or at least monotonous, growth of the standard deviation of a measured distance σ. based on this assumption, the dependence between the incidence angle of the distance meter's beam and the accuracy of a distance measured by a prismless distance meter may be investigated by regression analysis. the principle of this method consists in fitting a regression line

$$s = a\,\omega + b \qquad (12)$$

through the set of sample standard deviations of the distance s and assessing the significance of the slope a of this line, which represents a linear change in the sample standard deviation s in relation to the changing turning angle of the reflective surface ω. using the least squares method, the estimate of the slope of the regression line has the form

$$\hat{a} = \frac{\sum_{i=1}^{n}(\omega_i - \bar{\omega})\,s_i}{\sum_{i=1}^{n}(\omega_i - \bar{\omega})^2}, \qquad (13)$$

where si is the sample standard deviation and ωi the incidence angle of the distance meter's beam for the i-th turning of the reflective surface, n is the number of turnings of the reflective surface (n = 7) and ω̄ is the sample mean incidence angle of the distance meter's beam. the significance of a change in the sample standard deviation s in relation to a change in the turning of the reflective surface ω may be assessed by a statistical hypothesis test in which the null hypothesis of a zero slope of the regression line is investigated:

$$H_0: a = 0, \qquad H_1: a \neq 0. \qquad (14)$$

the testing criterion is defined by the relation

$$t = \frac{|\hat{a}|}{s_{\hat{a}}}, \qquad (15)$$

where s_â is the estimated standard deviation of the regression line slope estimate,

$$s_{\hat{a}} = \frac{s_r}{\sqrt{\sum_{i=1}^{n}(\omega_i - \bar{\omega})^2}}, \qquad (16)$$

and sr is the residual standard deviation expressed in the form

$$s_r = \sqrt{\frac{\sum_{i=1}^{n}\left(s_i - (\hat{a}\,\omega_i + \hat{b})\right)^2}{n-2}}. \qquad (17)$$

the acceptance or rejection of the null hypothesis of a zero slope of the regression line is assessed at the level of significance

$$P\left(t > t_{\alpha/2,\,n-2}\right) = \alpha = 0.05, \qquad (18)$$

where tα/2,n−2 is the critical value of student's t distribution with n − 2 degrees of freedom. if the value of the testing criterion t exceeds the critical value tα/2,n−2, the null hypothesis is rejected at the significance level α and a zero slope of the regression line is not proved. the p-value for the test of the slope of the regression line equals the probability

$$P\left(t > t_{p/2,\,n-2}\right) = p. \qquad (19)$$

the resulting values of the changes in the sample standard deviations, together with the results of the testing, are shown in tab. 3.

trimble s6
  distance [m]    â [µm/gon]    sâ [µm/gon]    t        p-value [%]
  5               2.9           0.4            8.119    0.0
  10              1.0           0.7            1.423    21.4
  20              0.7           0.5            1.396    22.2
  40              3.3           1.4            2.384    6.3

topcon gpt-7501
  distance [m]    â [µm/gon]    sâ [µm/gon]    t        p-value [%]
  5               0.6           1.1            0.572    59.2
  10              0.5           1.0            0.485    64.8
  20              0.5           1.2            0.404    70.3
  40              4.8           3.5            1.394    22.2

topcon gpt-2006
  distance [m]    â [µm/gon]    sâ [µm/gon]    t        p-value [%]
  5               0.8           1.7            0.501    63.8
  10              1.4           0.5            2.966    3.1
  20              1.4           1.5            0.927    39.6
  40              -1.1          1.2            0.982    37.1

leica hds 3000
  distance [m]    â [µm/gon]    sâ [µm/gon]    t        p-value [%]
  5               6.9           1.4            4.983    0.4
  10              2.9           1.3            2.214    7.8
  20              3.1           2.3            1.348    23.6
  40              0.4           4.6            0.088    93.4

table 3: assessment of linear changes in sample standard deviations (critical value t₀.₀₂₅,₅ = 2.571 in all cases)

the cases in which the null hypothesis was not rejected at the 5% significance level, i.e. in which no significant linear change in the sample standard deviation was found, are those with p-values greater than 5%. as the results imply, in the absolute majority of cases no unfavourable trend in the accuracy evolution of prismless distance measurement was found.
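the slope test of relations (12)–(19) coincides with the usual t-test of a linear-regression slope; a minimal sketch using illustrative (made-up) sample standard deviations rather than the measured values:

import numpy as np
from scipy import stats

# significance test of the slope of the regression line s = a*omega + b;
# the p-value returned by linregress is the two-sided t-test of h0: a = 0
# with n - 2 degrees of freedom
omega = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)        # turnings [gon]
s = np.array([0.55, 0.58, 0.61, 0.63, 0.71, 0.69, 0.82]) / 1000   # illustrative std devs [m]

res = stats.linregress(omega, s)
t = abs(res.slope) / res.stderr                                   # testing criterion (15)
t_crit = stats.t.ppf(1 - 0.05 / 2, omega.size - 2)                # t_{0.025,5} = 2.571
print(f"slope = {res.slope * 1e6:.2f} um/gon, t = {t:.3f}, "
      f"critical value = {t_crit:.3f}, p-value = {100 * res.pvalue:.1f} %")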
graphic interpretation of the precision development of a measured distance

to simplify the interpretation of the precision development of prismless distance measurement in relation to the selected measurement configurations (see above), the resulting sample standard deviations of the distance s determined from repeated measurements were displayed in three-dimensional graphs, both in relation to the growing measured distance and to the growing incidence angle of the distance meter's beam (i.e. the turning of the reflective surface). to obtain a more illustrative picture of the precision development of the measured distances, the set of sorted standard deviations was approximated by a smooth surface (fig. 5). the approximation was performed using the bicubic interpolation method.

figure 5: graphic display of the precision development of distance measurement by a prismless distance meter

conclusion

in the branch of engineering geodesy and laser scanning, it is currently generally accepted that the precision of distance measurement obtained by a prismless distance meter falls with the growing incidence angle of the distance meter's beam onto the reflective surface of the target object. therefore, a measuring experiment was performed to confirm or disprove the validity of this opinion. the results reached during the experiment (displayed in tab. 2, tab. 3 and in fig. 4, fig. 5) lead to the conclusion that the incidence angle of the distance meter's beam onto the reflective surface has no effect on the standard deviation of a distance measured by a prismless distance meter. the values of the sample standard deviations of the measured distances (determined from repeated measurements) show random fluctuations and do not imply any systematic drop in the accuracy of prismless distance measurement. an exception is the rapid growth in the sample standard deviation at the measured distance of 40 m and a turning of the reflective surface by 90 gon with the trimble s6 and topcon gpt-7501 instruments. this growth, however, cannot be included in the final conclusions of the experiment, as it represents an extremely unfavourable case of distance measurement. assuming a cone-shaped divergence of the distance meter's laser beam, the footprint of the laser at a distance of 40 m is so large, and the width of the tested reflective surface in the direction of the incident beam at a turning of 90 gon is so small, that it cannot be guaranteed that the whole distance meter's beam is reflected from the board. the results of measurement in this case are highly untrustworthy. no significant systematic (linear) deterioration in the accuracy of prismless distance measurement in relation to the growing incidence angle of the distance meter's beam onto the reflective target surface was proved on the basis of the regression analysis of the measured data.
besides the principal objective of the experiment, another new finding was made while comparing the values of sample standard deviations of measured distances with corresponding standard deviations of measurement stated by the instrument manufacturers (σdb, see tab. 1). these sample standard deviations describe the random component of the total standard deviation σdb and are significantly lower in value than the standard deviations. we may, therefore, assume that the manufacturers consider a relatively high systematic component of the standard deviation of a distance, which, of course, is not manifested in repetitive distance measurements. to conclude, we may state that the generally recognized opinion of the effect of the distance meter’s beam incidence angle (turning of the target reflective surface) on the accuracy rate geoinformatics fce ctu 2011 128 trasak, p. et al: testing of the accuracy dependency of prismless distance measurements on the beam incidence angle of distances measured by a prismless distance meter is not correct for distances measured in an interval of 0 m – 40 m. this effect was not manifested during the performance of the measuring experiment. the article was written with support from research plan msm 6840770001 “reliability, optimization and durability of building materials and constructions”, partial task “geodetic monitoring ensuring the reliability of structures”. references 1. corporate literature for instrument trimble s6 (in czech)1 2010-05-15. 2. corporate literature for instrument topcon gpt – 7501 (in czech). http://obchod. geodis.cz/geo/gpt-7500. 2010-05-15. 3. štroner, m. suchá, j. pospíšil, j.: verification of characteristics of total stations topcon gpt-2006 – part 1 (in czech). stavební obzor. 2007, vol. 16, no. 2, pp. 45-48. issn 1210-4027. 4. corporate literature for instrument leica hds 3000. http://hds.leica-geosystems. com/en/5574.htm. 2010-05-15. 5. böhm, j. – radouch, v. – hampacher, m.: theory of errors and adjustment calculus (in czech), 2nd edition, praha, geodetický a kartografický podnik 1990. /isbn 807011-056-2/. 6. eckschlager, k. – horsák, i. – kodejš, z.: assessment of analytical results and methods (in czech), 1st edition, praha, sntl – nakladatelství technické literatury 1980. 7. d’agostino, r. b. – belanger, a. – d’agostino, jr., r. b.: a suggestion for using powerful and informative tests of normality. the american statistician, vol. 44, no. 4, pp. 316–321, 1990. 
1http://www.geotronics.cz/index.php?page=shop.product_details&flypage=flypage.tpl&product_ id=4&category_id=15&option=com_virtuemart&itemid=7 geoinformatics fce ctu 2011 129 http://obchod.geodis.cz/geo/gpt-7500 http://obchod.geodis.cz/geo/gpt-7500 http://hds.leica-geosystems.com/en/5574.htm http://hds.leica-geosystems.com/en/5574.htm http://www.geotronics.cz/index.php?page=shop.product_details&flypage=flypage.tpl&product_id=4&category_id=15&option=com_virtuemart&itemid=7 http://www.geotronics.cz/index.php?page=shop.product_details&flypage=flypage.tpl&product_id=4&category_id=15&option=com_virtuemart&itemid=7 geoinformatics fce ctu 2011 130 ________________________________________________________________________________ geoinformatics ctu fce 2011 18 reconstruction of eroded and visually complicated archaeological geometric patterns: minaret choli, iraq rima al ajlouni1, petr justa2 1texas tech university, college of architecture box 42091, lubbock, texas 79409-2091, usa rima.ajlouni@ttu.edu 2gema art group haštalská 27, 110 00 prague 1, czech republic, justa@gemaart.cz keywords: reconstruction, documentation, islamic geometric patterns, archaeological ornaments, minaret choli abstract: visually complicated patterns can be found in many cultural heritages of the world. islamic geometric patterns present us with one example of such visually complicated archaeological ornaments. as long-lived artifacts, these patterns have gone through many phases of construction, damage, and repair and are constantly subject to erosion and vandalism. the task of reconstructing these visually complicated ornaments faces many practical challenges. the main challenge is posed by the fact that archaeological reality often deals with ornaments that are broken, incomplete or hidden. recognizing faint traces of eroded or missing parts proved to be an extremely difficult task. this is also combined with the need for specialized knowledge about the mathematical rules of patterns’ structure, in order to regenerate the missing data. this paper presents a methodology for reconstructing deteriorated islamic geometric patterns; to predict the features that are not observed and output a complete reconstructed two-dimension accurate measurable model. the simulation process depends primarily on finding the parameters necessary to predict information, at other locations, based on the relationships embedded in the existing data and in the prior -knowledge of these relations. the aim is to build up from the fragmented data and from the historic and general knowledge, a model of the reconstructed object. the proposed methodology was proven to be successful in capturing the accurate structural geometry of many of the deteriorated ornaments on the minaret choli, iraq. however, in the case of extremely deteriorated samples, the proposed methodology failed to recognize the correct geometry. the conceptual framework proposed by this paper can serve as a platform for developing professional tools for fast and efficient results. 1. introduction the choli minaret is dated to the atabag period (1190-1232), during the reign of muzaffaruddin al-kawkaboori, the king of erbil. the structure was built from low burnt bricks and gypsum based renders and mortars. due to the long term effect of deterioration, the essential part of the architecture has disappeared and thus the leaning minaret is the last survival of the past mosque. 
the current shape of the minaret covers the lower, heptagonal/octagonal base (about 12 m high) and the upper broken cylindrical part (about 24 m high) with double spiral staircase inside. a large extent of precious historic fragments of renderings and embossments were identified in the lower part of the object. particularly large scale findings of egyptian blue ceramic decoration in niches are considered as utmost important. all fragments of decorations were seriously affected by weathering and mechanical damages (figure 1). to preserve these delicate artifacts, it is important to understand their geometry, mathematics and generating principles. producing accurate virtual reconstructions of these ornaments can help professionals to test the different preservation scenarios and make the best decisions. however, the task of reconstructing these deteriorated patterns faces many practical challenges. the main challenge is posed by the fact that these patterns are broken, incomplete or hidden. recognizing faint traces of eroded or missing parts proved to be an extremely difficult task. this is also combined with the need for specialized knowledge of patterns‟ structure, in order to build up from the fragmented data and from the historic and general knowledge, an accurate model of the deteriorated ornament. this paper presents a methodology for reconstructing these deteriorated geometric patterns; to predict the features that are not observed and output a complete reconstructed two-dimension accurate measurable model. the simulation process depends primarily on finding the parameters necessary to predict information, at other locations, based on the relationships embedded in the existing data and in the prior-knowledge of these relations. by incorporating the mathematical rules of patterns‟ structure, this process involves creating a hypothetical geometrical model, which can be fitted to the available partial data to fill in the missing gaps. the ________________________________________________________________________________ geoinformatics ctu fce 2011 19 reconstruction process is designed to measure certain patterns‟ variables, which are used within a mathematical formula to produce the whole system. figure 1: the choli minaret, iraq, a: an image showing the lower part of the minaret, b: rectified images of the different ornaments under investigation. 2. literature review 2.1 islamic geometry the rise of islamic culture in the seventh century has marked the beginning of a new artistic, decorative and sacred tradition [1]. this artistic tradition was completely inspired by a deep religious philosophical and cosmological approach, which embodied all aspects of life and manifested itself in every product [2,3]. the use of geometric patterns is one of the chief characteristics that give the islamic artistic heritage its distinct identity. for more than thirteen centuries they acted as unifying factors. they have linked the architectural products from all over the islamic world, extending across europe, africa and asia [4,5]. geometry as an abstract art form was developed in part due to the discouragement of images in islam on basis that it could lead to idolatry [6]. these visually diverse formations grow out from the same spiritual origin to represent the multiple manifestation of the divine [7,8]. islamic geometric patterns were applied to all kinds of materials: metal work, woodworks, ceramics, textiles, carpets, stone, fabric and miniatures. 
the universal application of these patterns implies that they were created based on solid formal methods. the act of designing and applying these patterns was considered a form of worship and encapsulated a divine religious experience. these artists and the methods they used were secretive, and only few passed on this tradition until it was lost [9,10]. the vast variety of geometric formation and the strict rules of its generation reveal an important inner dimension of islamic tradition: “unity in multiplicity and multiplicity in unity” [11]. islamic designs were constructed by using a compass and a straight edge; therefore the circle becomes the foundation for islamic patterns [12,13,14]. this “conventional” method emphasizes the symbolic relationship between the global dimension and its center. the generating force of patterns lies in the center of the circle, which represents the point at which all islamic patterns begin. it is the symbol of a religion that emphasizes one god, the center of universe [15,16]. 2.2 periodic islamic patterns this decorative art is generated from a discrete geometrical unit using the circle as its basis, and then applying the principles of repetition, and symmetry to it [17,18,19]. although each pattern has its own distinct geometrical design, the vast varieties of ornamental compositions are based on a simple constitutive geometry, which is generated from a limited number of simple base grids of polygons [20]. mathematically these grids are known as regular tessellations, in which one regular polygon is repeated to fill the plane. the main basic grids are: a) basic grids based on the equilateral triangles and hexagons and its multiples. b) basic grids based on the squares, octagons and their multiples. c) basic grids based on the pentagon and its multiples. d) basic grids based on nine fold. e) basic grid based on seven fold. f) basic grids based on eleven fold. g) basic grids based on a combination of basic categories. figure 2 demonstrates the construction process of an octagon-square based pattern. ________________________________________________________________________________ geoinformatics ctu fce 2011 20 2.3 aperiodic islamic patterns recently, islamic patterns, with five-fold symmetries and non-periodic geometry, have been discovered in some medieval islamic patterns [21]. these non-periodic patterns exhibit a complicated long-range translational order that is not periodic and a long-range orientational order that does not have rotational point symmetry. this discovery has attracted a significant scientific interest into understanding the structural principles of islamic formations. some scientists have suggested that by the 1200 c.e. a conceptual breakthrough occurred in the way that muslim artists conceived and implemented these patterns; shifting from the conventional-global method of using a compass and a straight edge, to a new perception, in which islamic patterns were reconceived as tessellations of different types of prescribed tiles [22]. these scientists indicated that building theses complex non-periodic patterns would have required the application of a complex set of mathematical rules, which they believe, are beyond the grasp of the muslim artists. others have argued against this localized tiling system, which is challenging centuries of proven “conventional” knowledge as well as doubting the conceptual abilities of the muslim artists to comprehend the global construction rules of these complex patterns. 
inspired by the conventional view of islamic geometry, recently, the first global construction method that is able to describe the long range order of aperiodic formations was proposed [23]. al ajlouni (2011), proposed a global multi-level hierarchical framework that is able to describe the long-range translational and orientational order of aperiodic formations. the proposed model shows that geometric arrangements of the non-periodic formations are determined entirely by one hierarchical framework, which works in perfect concert with the “conventional view” of pattern geometry. it suggests that the position of geometrical units, locally and globally, is defined by one global framework, and not tiled based on local tiling system. this hierarchical principle, presents a new methodology for understanding and constructing complicated islamic patterns, which might generate a new perspective into the history, craft, construction and creativity of the muslim artists. figure 2: the construction process of an octagon-square based pattern 2.4 the choli ornaments [24] islamic architecture and its decorative ornamentalism form an integral unity. the same attitude was applied to the construction of the choli minaret in erbil. due to the restrictions in islamic art namely absence of figural art and elimination of certain creativity, decorative design has developed itself to the perfection in the middle ages. this art was in fact in its complexity an applied science, which in the fields of mathematics, geometry and optics reached a high level at that time. islamic ornamentalism widely used spatial zoning and contrasts of light and shadow and emphasised the use of blue colour, the colour of water. all these elements were abundantly applied in the decorative composition of the choli minaret. the minaret makes use of the contrast of light and shadow in the spatial arrangement of the brick “mosaic” of the outer surface/shell and blue colour in ornaments in the minaret niches and dividing strips, using glazed tiles. the outer shell ornament formed regular patterns composed of the sizes of bricks. the design of niches and dividing strips also stemmed from the sizes of bricks and their parts, which were complemented with geometric shapes made of glazed tiles (figure 1). the comprehensive conservation treatment of the minaret was implemented in 20082009 covering also the partial reconstruction of niches. all fragments of ceramic, brick and stucco decorations were seriously affected by weathering and mechanical damages. the principle of intervention was kept in terms of pure conservation of the historic landmark. the stabilization of the whole structure and conservation treatments of all surfaces were carried out without the tendency to reconstruct missing and unknown parts of the minaret. the only exception was the area of lower niches where pure conservation would be confusing from the architectural point of view. the inside area of all niches has been covered by mortar imitating the under layer for ceramic tiles. all remaining original parts were carefully consolidated and retouched. the inner vaults of niches were reconstructed to its original shape to make the architectural frame better identified. samples of glazed tiles were tested for the purpose of determination of the type of terracotta. all samples displayed same results and proved that the ceramic body was made from ceramic material (terracotta) burned at lower temperatures. 
the coloured decoration layer is an enamel, probably a low-melting enamel coloured by cu-compounds. in the cross section of the studied tiles, a clear bottom white vitreous layer containing silica was documented. approximately 18 % sodium and 65 % silica, together with potassium, calcium and copper, were proved to be present in the blue vitreous substance. the production of the so-called egyptian blue glaze, with copper oxides in a strongly alkaline medium, was relatively frequent in the area; the first glazes documented in the middle east were the decorations of the ishtar gate in babylon. 3. research design the research at hand encapsulates two approaches within its methodology. the qualitative approach is evident in the process of reading and interpreting the surviving fragments as well as in evaluating the final reconstructions. the quantitative approach is manifested through using mathematical knowledge of pattern generation to produce the geometric models, which are used to simulate the missing information from the partial data. although this research is generally based on deductive logic, it still uses induction to generalize its output beyond the observed instances; the reconstruction process involves "induction, deduction and analogy". this research follows an empirical paradigm in testing its methodology and uses experimentation as its main strategy. the proposed methodology includes three main tasks. 3.1 photogrammetry and image rectification [25] calibrated digital cameras, the réseau photogrammetric camera rolleimetric 6006 and a total station were used for the basic documentation of the choli minaret. the geodetic measurements were taken at a temperature of around 45 °c, and the total station experienced some problems with the lcd display in this climate. a small provisional geodetic network consisting of 4 points was stabilized by means of nails and temporary marks. first, network point adjustments were made on site; preliminary control calculations carried out in erbil gave satisfactory results of about 7 mm in position. all the necessary control points and object points were measured with an accuracy of about 1-2 cm in position; altogether approximately 250 points were calculated. sets of 25 digital photogrammetric images were taken using the canon 20d digital camera with a resolution of 8 mpx. this camera was calibrated using the photomodeler software in the laboratory of photogrammetry at the czech technical university in prague; the two zoom lenses used for imaging were calibrated at focal lengths of 10 mm and 22 mm, and of 17 mm and 85 mm, respectively. upon completion of the expedition, all images taken on the site were scanned using a professional film scanner (nikon coolscan 8000) at a true 2500 dpi. the outdoor parts of the choli minaret were measured and processed by intersection terrestrial photogrammetry in the photomodeler software (residuals on control points were about 2 cm); all construction and editing of the photogrammetrically measured items were processed in autocad. 3.2 reading and interpreting the surviving fragments a periodic geometric pattern is defined mathematically as "a planar arrangement of line segments that together delineate copies of a small number of different shapes" [26].
to understand these patterns mathematically, it is important to study their abstractions, in which all rendering effects and colors are discarded and only the basic line structure is kept. the process starts by mapping the basic line structure of the surviving line fragments through extracting their center lines. the produced skeleton represents the minimum information needed to preserve the general structure of the available data. this abstraction is done manually, with the aid of autocad tools. these line abstractions are then used as the basic template for pattern analysis and interpretation. the interpretation process involves deconstructing these abstract geometric formations into their elementary components, and then investigating the rules that organize the relations between geometric primitives (points and lines). the definition of certain intersection points, edges and shape symmetries is essential to the interpretation process. in this process, certain parameters within the pattern's structure are defined, measured and then used to identify the generating basic grid and the repeated star units. different mathematical models are then tested to see which one fits the available parameters, and these models are checked against the surviving fragments to define the correct match. a) defining and generating the basic grid. periodic islamic patterns are constructed based on a limited number of basic grids, which are generated from patterns of circles [27]. the complex geometric patterns are all elaborations of simpler constructions of circles, which are often used to determine the basic grids [28,29]. the process of defining the basic grid starts by locating all points of shape symmetry within the pattern. these points represent the center points of the circles that constitute the basic grid. the structure around this area is then mapped to define the polygon edges and divisions. the careful arrangement of these polygons generates the general structure of the basic grid. figure 3a shows the basic grid of 14-fold and 11-fold polygons, generated for one deteriorated ornament on the choli minaret. b) defining and generating the repeated star units. defining the repeated star unit involves interpreting the line information contained within each polygon. each repeated star unit is constructed using a single polygon. the star unit is formed by an array of lines connecting either the intersection points of the polygon's sides or the mid points of the polygon's sides. the repeated star can be as simple as one array of lines, or a combination of two or more simple stars (arrays). figure 3b demonstrates the construction process of the two repeated star units used to generate the sample pattern on the choli minaret. 3.3 generating the final reconstruction the final reconstruction is generated by combining the repeated star units and the basic grid of polygons. as shown in figure 4a, the fourteen-fold star unit and the eleven-fold star unit are inserted into the grid of polygons, guided by the lines of the surviving fragments. the star units are rotated and positioned according to the laws of the pattern's symmetry. the arrays of each star are then extended beyond the edges of their polygon to meet other arrays and form the connection areas between all polygons (figure 4b).
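before the trimming step described next, the star-unit idea itself can be made concrete with a few lines of code. the following is only a minimal sketch, not the authors' workflow (which was carried out manually in autocad): it generates one "array" of an n-fold star unit by joining each vertex of a regular polygon to the vertex a fixed number of steps further along, as outlined in section 3.2 b); the values of n and the connection step are illustrative.

```python
import math

def regular_polygon(n, radius=1.0, center=(0.0, 0.0)):
    """vertices of a regular n-gon inscribed in a circle of the given radius."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def star_unit(n, step):
    """one array of a star unit: every vertex is joined to the vertex
    `step` positions further along the polygon (an {n/step} star polygon)."""
    pts = regular_polygon(n)
    return [(pts[i], pts[(i + step) % n]) for i in range(n)]

# e.g. a fourteen-fold star as one array of chords; a richer repeated unit
# would combine two or more such arrays, as described in section 3.2 b).
fourteen_fold = star_unit(14, 5)
for a, b in fourteen_fold[:3]:
    print(a, "->", b)
```

in a full reconstruction, several such arrays would be overlaid on each polygon of the basic grid and their line segments extended to the neighbouring polygons, which is what the manual procedure in figures 3 and 4 does.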
the final step involves trimming the edges of the pattern to fill the frame of the original ornament and adding a thickness to the line pattern to match the original pattern (figure 4c). a b c figure 3: the process of defining and generating the basic grid and the repeated star unit a b c figure 4: the process of generating the final reconstruction ________________________________________________________________________________ geoinformatics ctu fce 2011 23 4. results and discussion the proposed methodology was tested on twelve different deteriorated ornaments on the choli minaret. the proposed methodology was successful in capturing the accurate structural geometry of nine patterns (figure 5). all of which are periodic patterns. however, in the case of extremely deteriorated samples, the proposed methodology failed to reconstruct the patterns; in most cases, surviving evidence was not sufficient to read the basic grid or the repeated star unit. however, in one case, where the surviving evidence seemed to be sufficient, the deteriorated fragments did not follow a periodic logic and therefore the methodology failed to render any results. based on these results, three basic grids were identified; five patterns were generated based on six folds symmetries. all of these patterns where located on the upper part of all niches. three patterns were generated based on four and eight folds and one pattern was generating based on a combination of eleven and fourteen folds. all of which were located in the lower part of the niches. figure 5: the final reconstructed patterns of the sampled data 5. conclusions the proposed methodology was proven to be successful in capturing the accurate structural geometry of the sampled data. the key challenge was evident in arriving at accurate interpreting of the surviving data. in addition, the need for specialized knowledge, skills and deep understanding of pattern generating principles are crucial to the analysis and testing of the different reconstruction options. a deeper understanding of the generating principles of different types of islamic patterns (i.e., periodic, aperiodic, etc.) is much needed. more investigation into their mathematics, techniques, craft and symbolic significance is essential to arriving at the best preservation decision. the proposed methodology relies heavily on the subjective nature of our perceptual power in understanding shape complexity and depicting its color differences. the problem with such methods is related to the subjective and limited human ability of recognizing faint traces of subtle color evidence. digital techniques offer many advantages over the human eye in terms of ________________________________________________________________________________ geoinformatics ctu fce 2011 24 recognizing subtle differences in light and color. the conceptual framework proposed by this paper can serve as a platform for developing digital pattern recognition tools for fast and efficient results. 6. references [1] stirlin, h.: islam: early architecture from baghdad to cordoba, london: taschen, 1996. [2] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. [3] al-bayati, b.: process and pattern: theory and practice for architectural design in the arab world, london: flexiprint ltd, 1981. [4] jones, d.: the elements of decoration: surface, pattern and light, architecture of the islamic world. its history and social meaning, 144-175. edited by g. michell. 
london: thames & hudson ltd., 1978. [5] kaplan, c., salesin, d.: islamic star patterns in absolute geometry, new york: acm press, 2004. [6] danby, m.: moorish style, london: phaidon press limited, 1995. [7] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. [8] al-bayati, b.: process and pattern: theory and practice for architectural design in the arab world, london: flexiprint ltd, 1981. [9] abas, s., salman, a.: geometric and group-theoretic methods for computer graphic studies of islamic symmetric patterns, computer graphics forum 11, no. 1 (1992): 43-53. [10] el-said, e.: islamic art and architecture: the system of geometric design, (1st ed.). reading, england: garnet publishing limited, 1993. [11] jones, d.: the elements of decoration: surface, pattern and light, architecture of the islamic world. its history and social meaning, 144-175. edited by g. michell. london: thames & hudson ltd., 1978. [12] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. [13] jones, d.: the elements of decoration: surface, pattern and light, architecture of the islamic world. its history and social meaning, 144-175. edited by g. michell. london: thames & hudson ltd., 1978. [14] el-said, e.: islamic art and architecture: the system of geometric design, (1st ed.). reading, england: garnet publishing limited, 1993. [15] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. [16] al ajlouni, r.: digital pattern recognition in heritage recording: an automated tool for documentation and reconstruction of visually complicated geometric patterns. -verlag-dm: germany, 2009. [17] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. [18] el-said, e.: islamic art and architecture: the system of geometric design, (1st ed.). reading, england: garnet publishing limited, 1993. [19] al ajlouni, r.: digital pattern recognition in heritage recording: an automated tool for documentation and reconstruction of visually complicated geometric patterns. -verlag-dm: germany, 2009. [20] gonzalez, v.: beauty and islam: aesthetic in islamic art and architecture, london: islamic publications ltd, 2001. [21] lu, p., steinhardt, p.: decagonal and quasicrystalline tilings in medieval islamic architecture, science (2007). [22] lu, p., steinhardt, p.: decagonal and quasicrystalline tilings in medieval islamic architecture, science (2007). [23] al ajlouni, r.: a long-range hierarchical clustering model for constructing perfect quasicrystalline formations, philosophical magazine, structure and properties of condensed matter. first published on: 25 january 2011 (ifirst). [24] justa, p.,houska, m., the conservation of minaret choli, erbil, iraq, proceedings from the iic congress, istanbul, turkey, 2010 [25] pavelka, k. svatušková, j. králová, v.: photogrammetric documentation and visualization of choli minaret and great citadel in erbil/iraq, in cipa athens 2007, p. 245-258. [26] kaplan, c.: computer generated islamic star patterns, http://www.cgl.uwaterloo.ca/~csk/washington/taprats/bridges2000.pdf. 2000. [27] gonzalez, v.: beauty and islam: aesthetic in islamic art and architecture, london: islamic publications ltd, 2001. [28] kritchlow, k.: islamic patterns: an analytical and cosmological approach, new york: thames & hudson inc, 1976. 
[29] el-said, e.: islamic art and architecture: the system of geometric design, (1st ed.). reading, england: garnet publishing limited, 1993. atcontrol software for leica at40x laser trackers filip dvořáček czech technical university in prague department of special geodesy czech republic filip.dvoracek@fsv.cvut.cz abstract the paper describes software called atcontrol, which is based on the mathworks matlab high-level programming language. this software is under constant development by the author in order to collect geospatial data by measuring with the absolute laser trackers leica at40x (at401, at402). commercially available software solutions are briefly reviewed and the reasons for developing the new controlling application are discussed. advantages of atcontrol concerning the metrological traceability of measured distances are stated. key functional features of the software are introduced. keywords: atcontrol, controlling software, leica geosystems, laser tracker, leica at401, leica at402, mathworks matlab, emscon server, length metrology 1. introduction this article describes a user-programmed controlling application for leica at40x absolute laser trackers. as the original at401 and the updated at402 share the same system software (firmware), atcontrol [3] is compatible with both devices. since the publication of an article about system software errors of leica at40x instruments [5], in which atcontrol was mentioned as the software used for testing, several inquiries about the application have been received from readers. this paper should therefore answer some of these questions and introduce the software to others as well; it does not substitute for an operational manual. the idea is to briefly evaluate the available controlling applications with respect to geodesy, to provide an overview of the programming involved, and to point out specific features of atcontrol. commercially available software it can be stated at the outset that there is a lack of software for the at40x suitable for classical geodesy. in fact, the author does not know of a single application whose main purpose is geodesy in general rather than industrial metrology. on the other hand, it must be admitted that the primary purpose of laser trackers is industrial metrology rather than geodesy. but surveyors and metrologists also want to benefit from such state-of-the-art equipment. the software supplied for free with the at40x is called leica tracker pilot. it is capable of measuring and displaying fundamental geodetic quantities. version 1.x did not allow data saving at all; version 2.x allows it, but only manually and with a potential risk of data loss. tracker pilot is therefore not really intended for serious measurement tasks, but rather for demonstration purposes only. the main aim of this software is to provide firmware updating, reflector and compensation file administration, and field and compensation test procedures. applications for industrial metrology are mainly extensive solutions designed for use in heavy industry, e.g. car and aircraft manufacturing, where components and final products are checked to see whether they are manufactured according to specific requirements.
even though the same quantities, such as lengths and angles, have to be measured, the difference between classical geodesy and the industry-specific use of the instrument is significant. the problem is that advanced functions, combined with a lack of fundamental operations, prevent commercial software from being used for general-purpose measurement in geodesy. the most important basic operation, in the author's view, is the saving of raw measured data such as horizontal and vertical angles and distances. the licensed polyworks v.12 obtained with the at401 and a downloaded demo version of spatialanalyzer [11] were not capable of such an action at the beginning of atcontrol's development. furthermore, it was not possible to save other important data (e.g. internal standard deviations, environmental conditions). perhaps some additional extensions would allow it, but this has never been tried; the main reason has been financial resources. it does not seem very efficient to buy software costing many thousands of euros, not utilize its advanced functions, and still be forced to buy an extension to enable very simple features. software developed for industrial metrology usually exploits the instrument only as a machine for coordinate measurement, and controlling the instrument is different from what surveyors are looking for; e.g. performing multiple repetitions and two-face measurements are not common actions during industrial measurement, where speed is important when hundreds of points have to be checked. for the purpose of calibrating the czech state long-distance measuring standard koštice, an application called geotracker [6] was developed by ing. pavel hánek, ph.d., at the research institute of geodesy, topography and cartography. it is written in microsoft visual c#, its graphical user interface (gui) is in the czech language, and it is not available for download in either a full or a demonstration version. the software is rather simple, but it enables the measuring and saving features discussed in the paragraph above. recently, the finnish a.m.r. company developed the dcp pocket application, which is able to control at40x trackers. the application has not been tested, because only 3d coordinate measurement is mentioned in the dcp pocket 1.1 brochure. 2. design of atcontrol atcontrol is a mathworks matlab application with a gui in english. the application consists of over 30 .m files, with over 4000 lines of source code in the main .m file. an up-to-date 32-bit or 64-bit mathworks matlab or matlab runtime environment is required to run the compiled executable file. leica's programmer's manual [7] was used while creating the application, and the provided libraries (atl com – active template library, component object model) are used for establishing the connection and placing commands to the instrument. the application has a synchronous interface, with a few commands which are asynchronous by nature (e.g. querying reflectors and compensations, obtaining transformation points, error reporting). a synchronous interface means that commands are performed by the emscon server in the same order as they were received. the correct license file has to be present in the installation folder to enable the full functionality of atcontrol. the secondary method of unlocking all features is to enter a correct pin code. without the license or the pin code, saving of measured data is restricted and the device's initialization is forbidden.
sensor can be initialized in any other software (e.g. tracker pilot) and then atcontrol is useable. the licensing policy is currently not set because of a potential financial profit but in order to monitor current software users and get in touch with them. as the base of users is expected to be very small, contacts with them are valuable with regard to the future testing and application´s enhancements. even if users often require to obtain application´s source codes, it has to be stated that scripts are not provided or published by the author in order to eliminate unwanted errors and modifications introduced by users. of course, the installation routine of atcontrol is avoidable, but it is very convenient for a standard user. the installation cabinet is prepared under the advanced installer software and it is able to automatically perform several important tasks. 1. to download and installs matlab environment if it is not already available. 2. to write atcontrol entries to the operating system´s registry. 3. to copy all needed files in program files/atcontrol folder. 4. to copy setting files in user/appdata/local/atcontrol. 5. to register atcontrol.dll and ltvideo2.ocx (possibly others) libraries in the operating system. 6. to create a start menu entry and a desktop shortcut. 7. to prevents of multiple installations of atcontrol. the gui is designed to be simple but in order to provide an access to all functions as quickly as possible. it is expected to satisfy needs of common surveying and laboratory testing. the graphical design is subject to changes as new functions are added to software. the upper menu bar, the icon bar and the main program window can be used to run desired function and control the laser tracker. there is also a large data window designated to display some measurement data (no., station, target, val1, val2, val3, stdval3, hd, vd, t, face). info box shows information, warnings and errors if an expected/unexpected event occurs. the first usable version (v. 0.9) of atcontrol has been issued on 2013-05-03. since then, 17 updated versions have been produced. errors in leica´s firmware concerning the air refraction has been eliminated since 2013-06-21 (v. 1.5). version 3.3 is currently the latest stable version of atcontrol issued on 2015-08-13. 3. programming for programming, low-level c language can be used along with more common c++ and c# languages. furthermore, high-level com (component object model) interface called tracker programming interface (tpi) is convenient for creating applications using visual basic, vba (visual basic for applications), matlab and also c++ and c# under microsoft windows. at40x software development kit (sdk) is available for downloading on leica´s websites after a free registration. the package of about 100 mb contains an extensive set of files but only a few of them are actually needed for a chosen programming language. the most geoinformatics fce ctu 14(2), 2015 11 f. dvořáček: atcontrol software for leica at40x laser trackers figure 1: the gui of atcontrol 1.9 valuable file is a dll library called ltcontrol.dll which contains a database of all of the common commands used for controlling at40x instrument through the com interface. it has to be registered in windows with regsvr32 command. the matlab’s high-level programming language is not suitable for developing a professional commercial software, however it is a very powerful and widespread multipurpose tool in research. 
many pre-programmed scripts, which are built into matlab, allow the programmer to concentrate on software functions rather than on time-consuming programming routine. the possibility of choosing the matlab system as the programming language has not been available since the very beginning of the at401's production; it was added later by the manufacturer. a short example script for matlab, attached to the sdk package, was used to get an idea of how communication between matlab and the emscon server works. furthermore, new commands and their meanings can be found in the programmer's manual [7]. several errors can be found in its more than 500 pages, but this is understandable considering the amount of content; unfortunately, no straightforward e-mail contact for reporting errors or asking questions has been found. in case of a problem caused by an error in the manual, a list of all methods of the ltcontrol.dll library can be called in order to see at least the number of variables for a specific command. going through the instrument's operational manual [10] sometimes helps too. figure 2: the gui of atcontrol 3.3. 4. at40x firmware errors one of the things which differentiate atcontrol from other existing commercial and user-programmed software solutions is that it enables several firmware errors of leica at40x instruments to be eliminated. this advantage was one of the reasons why a few at40x university users have been interested in the software, and maybe others will be in the future. for that reason, and for the convenience of the readers, the errors are briefly summarized further in the text; for more details see the author's article system software testing of laser tracker leica at401 [5]. 4.1. the refractive model a document describing the computation procedure for the group refractive index of air was obtained from leica on 5th may 2013 [8]. according to this paper, and also to practical testing, the at401 by default uses equations derived from edlén's formula. this can produce up to a 0.5 ppm difference from the ciddor & hill procedure (1996 [1], 1999 [2]) recommended by the resolution of the iag (international association of geodesy) in 1999 in birmingham [12]. even though these default formulas can be overruled by a user-programmed procedure, it is not very probable that many programmers would do so. atcontrol enables the employment of 11 different procedures for calculating the refractive index: barrell & sears (1939), iugg (1963), edlén (1966), owens (1967), peck & reeder (1972), birch & downs (1994), ciddor (1996), ciddor & hill (1999), iag (1999), ciddor (2002), leica at40x (2013). all equations are derived from primary data sources – the authors' original papers [4]. the co2 content in air can be inserted (default 400 ppm) and it is taken into account in all formulas which support it. 4.2. the wavelength of the adm the paper from leica [8] states that the at401 operates at a 795 nm wavelength, whereas the instrument's manual declares 780 nm [10]. it turns out that the at401 physically operates with a 795 nm laser beam but performs the computations with the wrong wavelength of 780 nm. therefore, a systematic distance-independent error of about 0.3 ppm in the refractive index of air is present.
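the order of magnitude of this wavelength error can be checked independently of atcontrol. the sketch below is only a rough, standalone verification, not code from the application: it evaluates the dispersion term of the updated edlén equation for standard air (coefficients as published by birch & downs, 1994) and the usual group-index relation n_g = n + sigma * dn/dsigma, leaving out the density and humidity corrections, which largely cancel when two nearby wavelengths are compared.

```python
def group_refractivity(wavelength_um):
    """group refractivity (n_g - 1) * 1e8 of standard air, using only the
    dispersion term of the updated edlen equation (birch & downs 1994) and
    n_g = n + sigma * dn/dsigma; density/humidity corrections are omitted."""
    s2 = (1.0 / wavelength_um) ** 2          # sigma^2, sigma in 1/um
    return (8342.54
            + 2406147.0 * (130.0 + s2) / (130.0 - s2) ** 2
            + 15998.0 * (38.9 + s2) / (38.9 - s2) ** 2)

ng_780 = group_refractivity(0.780)            # wavelength assumed by the firmware
ng_795 = group_refractivity(0.795)            # wavelength actually emitted
diff_ppm = (ng_780 - ng_795) * 1e-2           # 1e-8 units -> ppm
print(round(diff_ppm, 2), "ppm")              # roughly 0.3 ppm
print(round(diff_ppm * 160), "um over 160 m") # several tens of micrometres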
atcontrol uses 795 nm, or the user-inserted value of the wavelength, in all computations when the wms (weather monitor status) is notconnected or readonly. when it is in the readandcalculaterefractions mode, there is no possibility of fixing the error, because all computations are left to be performed by the emscon server. 4.3. improper updates of the group refractive index of air the issue of improper updates of the refractive index is a bit more complicated to discover and also to fix. for some unexplained reason, the at401 tracker neglects changes in the refractive index of air of up to 0.5 ppm. even if new atmospheric parameters are obtained from the atc400 (at controller 400) meteostation, or if a newly computed user refractive index is inserted, the emscon server does not update the value in its internal memory. as a result, the measured length is very often corrected with an outdated value of the refractive index, sometimes even several hours old. the trick for fixing this error consists in applying a wrong refractive index which differs from the right one by at least 0.5 ppm; setting the right value follows immediately. by doing so, atcontrol always ensures that the correct refractive index is stored in the emscon server memory before a new measurement starts. every distance is then corrected by a refractive index of air computed from atmospheric parameters (temperature, atmospheric pressure, relative humidity and possibly co2 content) which are at most 20 s out of date (the emscon server refresh rate). this does not apply to the automatic readandcalculaterefractions mode of the wms, because the trick of setting the wrong refractive index cannot be used there. by fixing this error and by setting a user refraction procedure, a slight numeric inconsistency in the reading and saving of atmospheric parameters can occur. the measurement routine is as follows: 1) set a wrong refractive index, 2) obtain atmospheric parameters, 3) compute the refractive index of air, 4) set the refractive index, 5) start the measurement. if a new automatic atmospheric sensor reading takes place (every 20 s) in the short period of time between steps 2 and 5, the emscon server returns atmospheric parameters which were not used for calculating the refractive index of air. for that reason, the used_t, used_p and used_rh values are also saved to the output .txt file.

table 1: impacts of at40x firmware errors [ppm]
error             | max. error    | estimated common error
                  |               | laboratory    | outdoor
                  | min     max   | min     max   | min     max
refractive model  | 0.01    0.64  | 0.07    0.12  | 0.01    0.32
wavelength        | -0.31   -0.27 | -0.28   -0.28 | -0.28   -0.27
updating n        | -0.50   0.50  | -0.25   0.25  | -0.50   0.50
all together      | -0.80   0.87  | -0.46   0.09  | -0.77   0.55

table 2: impacts of at40x firmware errors [µm]
error             | max. error 160 m | estimated common error
                  |                  | laboratory 30 m | outdoor 160 m
                  | min     max      | min     max     | min     max
refractive model  | 2       102      | 2       4       | 2       51
wavelength        | -50     -43      | -8      -8      | -45     -43
updating n        | -80     80       | -8      -8      | -80     80
all together      | -128    139      | -14     3       | -123    88

4.4. insufficient resolution of temperature readings even though a resolution of 0.01 °c would be desirable for the reading of the air/object temperature, it is not usually available. it seems that the interface of the atc400 is designed to return only 3 valid digits for temperature readings: e.g. obtaining 9.99 °c is possible, but 10.01 °c is not (only 10.0 °c).
atcontrol mediates the reading of temperature at the maximum resolution obtainable from the emscon server, but it cannot fully fix the issue without intervention in the instrument's firmware. to demonstrate that the discovered errors are significant and should be taken into account by all current and potential users, a summary has been made. the purpose of the tables above (table 1, table 2) is to show how the errors may affect measuring with the at40x in ordinary conditions, both in the laboratory and outdoors. the impacts of the errors depend on the ambient atmospheric conditions and their gradients as well as on the length to the target point. both extremes (min, max) of these error intervals are evaluated in the tables. the max. error is derived as the maximum possible influence over the whole working range of the instrument (<0; 160> m distance, <0; 40> °c temperature, <500; 1100> hpa atmospheric pressure, <0; 95> % relative humidity). in laboratory conditions, temperature stability of ±0.25 °c at 20 °c and a 30 m length are assumed. a reduced temperature range of <0; 30> °c is used for the outdoor evaluation. notice that an error in the group refractive index of air causes a relative error of about the same amount (in ppm) in the measured distance. each of the described issues by itself potentially exceeds the manufacturer's specification for the accuracy of the distance measurement (5 µm) [9, 4]. 5. key functions of atcontrol in this section of the paper, the key functions of atcontrol are introduced. lan, wi-fi and user-defined tcp/ip address connection is possible. all wms modes (notconnected = off, readonly, readandcalculaterefractions = calculate) and all measurement modes (standard, precise, fast, outdoor) are integrated. all of leica's defined coordinate systems (4x rectangular, 2x cylindrical, 2x spherical) are incorporated. along with the original unit system (m, °c), a secondary imperial unit system (yd, °f) has recently been added. atcontrol always raises the gui with the settings from the last properly closed instance of the application. predefined settings for baseline measurement, a possibility to restore the default settings, and saving and loading of user settings are programmed. reflector and compensation file lists can be loaded and the chosen settings saved. the overview camera (ovc) can be connected and a live video stream transmitted to a pc in real time. the frame rate per second and the image parameters (contrast, brightness, saturation) can be set. by clicking on a point in the image, followed by the move button, the sensor automatically rotates to aim at the specified point. the laboratory laser interferometer renishaw ml10 and the step motor microcon m1486 can be controlled in order to allow simultaneous measurement by the tracker and the interferometer. the precise step movement of the interferometer carriage can be set; the lengths measured by the interferometer are stored in the pcode column. the robot mode in the manual setting is able to "remember" the positions of the last observed points. by switching to automatic, a sequence of previously measured points can be launched with the required repetitions, face mode and breaks between both individual measurements and measurement sets. this enables automatic monitoring measurement without a user's intervention. the sensor can be moved by clicking into the ovc image or by using multiple buttons and sliders.
the last measured point can be aimed or the sensor can be orientated according to a given horizontal and vertical angle. a reflector can be located by using the find reflector function with the appropriately chosen parameters (radius, time-out, distance). stable probing allows a repeated stationary point measurement without a pc operator interaction. when a reflector is placed still within a specified angle tolerance and a time period, the new measurement initiates automatically. the interferometer refraction function enables to compute the phase refractive index of air which can be manually inserted into a controlling software of a laboratory interferometer. this ensures that both the tracker and the laser interferometer use the same principle of the atmospheric correction of measured distances. almemo data join provides the possibility to combine externally collected atmospheric data with at40x data. in this case specifically, the ahlborn almemo data-logger (e.g. 2590-4s) exported data in .xls format can be added to a .txt data file from atcontrol. the most important step is to link tracker´s data with time relevant atmospheric data of the external data-logger. the maximum acceptable time difference for combining the parameters has to be stated. at the end, corrected distances reflecting all appropriate atmospheric data are computed and stored. if required, measured data and error reports are save to the .txt file. data are grouped into columns for easy further processing in any spreadsheet editor. all possible data obtained from the atc400 under extended statistical format are stored (hz, v, sd, std_hz, std_v, geoinformatics fce ctu 14(2), 2015 16 f. dvořáček: atcontrol software for leica at40x laser trackers figure 3: measurement graphs differences (rotation of a centring device leica gzr3) std_sd, std_total, covar_12, covar_13, covar_23, pointingerror_1, pointingerror_2, pointingerror_3, aprstd_hz, aprstd_v, aprstd_sd, aprstd_total, aprcovar_12, aprcovar_13, aprcovar_23, t, p, rh, trymode). furthermore, more information is included (time, #no., station, target, stationheight, targetheight, used_t, used_p, used_rh, used_co2, refindex, hd, vd, face, refprocedure, weathermonstatus, measmode, reflector, compensation, coordinatesystem, pcode). for surveying, several handy measurement settings are included, e.g. measurement repetitions, relative and absolute pause between measurements. the relative pause is an interval between the end of the last measurement and the start of the next one, the absolute pause is the interval between two subsequent starts of measurement. to save sensor´s batteries and time of measurement during the two face mode (1, 2, 1, 2), also the 50:50 (1, 1, 2, 2) and the irregular change (1, 2, 2, 1) of faces can be set. if a measurement failed and the add checkbox is ticked on, atcontrol adds new measurements as long as a specified number of successful repetition is completed. the measurement progress bar along with estimated time remaining to the end is displayed during the measurement process. measurement sequence can be terminated after each single measurement by clicking the cancel button. when measuring in a rectangular cs (coordinate system), the default cs with [0, 0, 0] station coordinates and a random orientation is set by the laser tracker. in many surveying geoinformatics fce ctu 14(2), 2015 17 f. dvořáček: atcontrol software for leica at40x laser trackers tasks, a specific measuring cs is required. 
to be able to perform instant work in that system, a transformation is necessary. this functionality is built inside atcontrol. nominal transformation point coordinates and standard deviations are loaded from a file and actual transformation points are instantly measured. the scale can be fixed to 1 or left unknown during transformation. both nominal and actual points can be weighted during computation. a protocol of the transformation´s result is created and can be stored as a separate file. marking out points is made easier by loading a file with point coordinates. if the mark out checkbox is ticked on and a measured target point number match the point number in the loaded file, coordinate differences are displayed when a measurement is finished. multiple graphs of different observed quantities and computed values can be created according to the user´s choice. graph data are then regularly updated after each successful measurement. absolute values or differences from the first measurement can be displayed. it is useful for monitoring purposes, e.g. distance differences and temperature changes can be observed and a suspicious measurement result is instantly detected. figure 4: 3d graphical sketch (rotation of a centring device leica gzr3) software for industrial metrology has advanced graphical cad style interface. this is not the goal for atcontrol but a graphical sketch of measured points could be needed if many points are measured on an object. for that reason, a 3d sketch of points is build-in. xyz coordinates are directly obtained from the measurement result or they are calculated if not a rectangular measurement system is being used. it is possible to automatically draw a line between observed points. additional manual drawing into the graph is possible. the graph is interactive and therefore the point cloud can be viewed from any desirable position, including a quick 2d view of xy, xz and yz planes. geoinformatics fce ctu 14(2), 2015 18 f. dvořáček: atcontrol software for leica at40x laser trackers 6. conclusion atcontrol is a user-programmed application designed to control measuring with leica at40x trackers. the main purpose of use of software is surveying in general, length metrology and the laboratory testing. some specific features have been developed mostly for author´s needs, e.g. a possibility of combining external meteorological data from ahlborn almemo data-loggers or controlling a laboratory laser interferometer renishaw ml10. it is possible to incorporate any similar add-on in order to create a highly automated measurement system. by correcting several system software errors, atcontrol ensures more control over metrological traceability of distance measurements. 11 well-known procedures for computing refractive index of air are included. automated savings of all values returned by the tracker along with other computed values and description data is granted. references [1] p. e. ciddor. “refractive index of air. new equations for the visible and near infrared”. in: applied optics 35 (1996), pp. 1566–1572. [2] p. e. ciddor and r. j. hill. “refractive index of air. 2. group index”. in: applied optics 38 (1999), pp. 1663–1667. [3] filip dvořáček. atcontrol. online. application to control measuring with leica at40x trackers. url: http://k154.fsv.cvut.cz/~dvoracek/software.html. [4] filip dvořáček. “nepřímé určení indexu lomu vzduchu pro výpočet fyzikální redukce elektronických dálkoměrů”. in: geodetický a kartografický obzor 59(101).10 (2013), pp. 253– 266. 
url: http://archivnimapy.cuzk.cz/zemvest/cisla/rok201310.pdf. [5] filip dvořáček. “system software testing of laser tracker leica at401”. in: geoinformatics fce ctu 13 (dec. 2014), pp. 49–57. doi: 10.14311/gi.13.6. [6] pavel hánek. geotracker. 2011. url: http : / / www . vugtk . cz / odd25 / kostice / geotracker.html. [7] leica geosystems. emscon 3.8. leica geosystems laser tracker programming interface programmers manual. online. switzerland, 2013. [8] leica geosystems. formula for calculating the refractive index of ambient air used for the leica at401 of hexagon metrology. 2013. [9] leica geosystems. leica absolute tracker at401. online. switzerland, 2010. [10] leica geosystems. leica at401 user manual v. 2.0. 2013. [11] new river kinematics. download. 2014. url: http://www.kinematics.com/download/ index.php. [12] j. m. rueger. refractive indices of light, infrared and radio waves in the atmosphere. university of new south wales, 2001. isbn: 9780733418655. geoinformatics fce ctu 14(2), 2015 19 http://k154.fsv.cvut.cz/~dvoracek/software.html http://archivnimapy.cuzk.cz/zemvest/cisla/rok201310.pdf http://dx.doi.org/10.14311/gi.13.6 http://www.vugtk.cz/odd25/kostice/geotracker.html http://www.vugtk.cz/odd25/kostice/geotracker.html http://metrology.leica-geosystems.com/common/shared/downloads/inc/downloader.asp?id_0=20453&submit_0=download&id_1=19781 http://metrology.leica-geosystems.com/downloads123/m1/metrology/at401/brochures-datasheet/leica%20absolute%20tracker%20at401%20factsheet_en.pdf http://www.kinematics.com/download/index.php http://www.kinematics.com/download/index.php geoinformatics fce ctu 14(2), 2015 20 introducing the new grass module g.infer for data-driven rule-based applications peter löwe helmholtz centre potsdam gfz german research centre for geosciences ploewe@gfz-potsdam.de abstract this paper introduces the new grass gis add-on module g.infer. the module enables rule-based analysis and workflow management in grass gis, via data-driven inference processes based on the expert system shell clips. the paper discusses the theoretical and developmental background that will help prepare the reader to use the module for knowledge engineering applications. in addition, potential application scenarios are sketched out, ranging from the rule-driven formulation of nontrivial gis-classification tasks and gis workflows to ontology management and intelligent software agents. keywords: module g.infer, grass, data-driven rule-based application 1. introduction maps are used to represent the world surrounding us. they are put into use as tools to categorize, classify and judge our environments, to make decisions and act accordingly. in more general terms, the science of mapmaking, cartography, is to provide usable and understandable spatial information for a section of space for decision support. the motivation for computer driven cartography, mostly shouldered by geographic information systems (gis) such as grass gis, is to perform the overall tasks of mapmaking as a workflow, including means to apply the human expertise and know-how required to infer decision-support for human actions. in gis, the development of such „map-making“ workflows is usually handled by stepwise execution of the consecutive processing steps by a human operator, to create and document the unfolding workflow, by interacting with the actual spatial data. once a mapping workflow has been laid out, the next step is its automatisation, turning it into software. this can involve scripting, i.e. 
the definition of an execution-chain of available gis modules, or programming, which includes the development of new gis modules. free and open source gis like grass gis allow rapid development of both solutions as the overall codebase can be exploited, so there is no need to reinvent previously developed functionalities because of copyright infringement issues. however, if a mapping workflow can be formulated by the human gis operator, but can not be implemented as script or gis module, there's a problem. in this case, the task at hand is basically solveable, but the available software environment lacks the flexibility to accommodate the workflow within acceptable time and effort constraints. this situation occurs frequently for classification tasks (remote sensing data or similar fields), resulting in the use of suboptimal classification algorithms: the implemented solution is not geoinformatics fce ctu 8, 2012 17 löwe, p.: introducing the new grass module g.infer oriented on the original task solving strategy, but is limited by available software tools and programming skills. a similar field is the flexible set up of gis workflows which needs to adapt the processing chain according to changing constraints such as the availability and quality of data input, again within acceptable time and effort constraints. what is needed in these two scenarios, both for classification and gis-workflow execution, is a way to encode understood, yet hard to formulate, „rules of thumb“, acquired from human domain experts. a tool based on such rules will excel in scenarios where the following tell-tale indicators exist [28] • classification tasks which may not appear demanding, but no robust way for building a solution can be defined in acceptable time and effort. • simple workflows, where the processing rules keep changing depending on the available input and other parameters. • problems which have not fully understood or are very complex to solve. in such cases, a rule-based approach, as provided by the new grass module g.infer becomes advantageous: • rule-based modelling allows to focus on "what to do" instead of "how to do it". • rules allow to express solutions to complex problems and to verify them consequently by logging the decision steps leading to a particular solution. • separation of logic (know-how) and data allows to keep the know-how to be stored in centralized rule-bases, providing a central point of access for further editing and improvement. • rules can also be human-readable, doubling as their own documentation and to be reviewable by domain experts. while grass gis provides a wide range of classification tools for raster, vector and volume data, a flexible yet generic approach tailored to conveniently express such rules, applicable to all gis data types, is currently lacking. grass 4.x and grass 5.x featured the r.infer module, which provided basic rule-based analysis capabilities for raster data [20][23]. the module remains to be ported into grass6 and grass7. the same holds true for the r.binfer module , which uses an inference engine based on bayesian statistics (making decisions based on past experience) to assist human experts in a field develop computerized expert systems for land use planning and management, basing bases the probable impacts of a future land use action on the conditional probabilities about the impact of similar past actions [1][2]. 2. artificial intelligence artificial intelligence (a.i.) 
is the field of computer science which focuses on the processes of human thinking an their implementation in software. a.i. is divided in various disciplines such as artificial life, software agents, neural networks, genetic algorithms, decision trees, frame systems and expert systems [6][25]. several grass gis modules, including the addon modules r.fuzzy.* [13][14], r.agent[22], magical [16][17][18], ann.*[24], have been developed to solve tasks related to a.i. disciplines. geoinformatics fce ctu 8, 2012 18 löwe, p.: introducing the new grass module g.infer knowledge representation and classification is the a.i. discipline focusing on how human knowledge for problem solving can be represented, manipulated and preserved. so called knowledge-based systems, also known as expert systems, facilitate the encoding of knowledge for automated reasoning or inference, i.e., the processing of data to infer conclusions, which can be mapped out in a gis. the overall process of making human expertise available through an expert system is called knowledge engineering. a rule engine implementing an expert system instance for a specific knowledge-domain is called a production rule system. the term "production rule" stems from formal grammars where it is described as an abstract structure that describes a formal language precisely [15]. such a production rule system is the core of g.infer. 3. the c language integrated production system the c language integrated production system (clips) is a production rule system toolkit for building expert systems. the project was started by nasa (johnson space center) in 1985, where it was maintained until the 1990s. it is currently hosted at sourceforge and is provided under a public domain licence. the name is an acronym for "c language integrated production system“, succeeding the original name „nasa's ai language (nail)“[26]. clips is written in c and provides a rule-based data-driven programming language. it resembles in syntax and user interface closely the language lisp [10]. clips traces its origins to inference's art which in turn stems from ops5 [27]. the clips language continued to evolve and includes today paradigms for rule-based, procedural, functional and object-oriented programming. since 1991, clips includes the clips object oriented language (cool) for object-oriented development. in the recent clips version 6.3, rules can be triggered via objects enabling object oriented modelling to drive the inference process. rules in clips can be looseor close-coupled: in the latter case, the activation („firing“) of a rule explicitly invokes the firing of other rules. this is also referred to as a categoric problem solving approach [26]. on the other hand, loose coupling is achieved by a rule-base manipulating sets of variables or non-ordered facts, with independent rules pattern-matching on these, firing only if certain value ranges are met. this considered a heuristic classification approach [26]. clips provides several approaches to deal with situations when multiple rules will be activated simultaneously and a prioritisation is needed. all rules can be provided with a integer salience value, allowing the rule with the highest salience value to fire first. alternatively, a clips knowledge-base can be partitioned into thematic modules. the modules are put in a sequence on an execution stack, allowing all rules from the top-most module to fire. once all rules from this module have been evaluated, processing moves on to the following module [7][8][9]. 
the rise of the java programming language led to implementations of languages similar to clips in projects such as jess, drools and jrules, adopting a similar syntax [2][4]. this family of rule-based and data-driven tools still shares the same basic syntax for the definition of rules to encode human knowledge. while the individual features and capabilities have diverged, it is still possible to port an application if a restricted subset of features is used to write portable programs. as a side effect, a wealth of documentation can be used from these clips-related projects such as g.infer [2][4][6][25]. geoinformatics fce ctu 8, 2012 19 löwe, p.: introducing the new grass module g.infer 4. rete the core of the production rule system clips is an inference engine that is able to handle a large number of rules and facts using the rete algorithm for forward-chaining inference. the word „rete“ is latin, meaning „net“ or „comb“. the rete algorithm was designed by dr charles l. forgie [3]. rete has become the basis for many popular rule engines and expert system shells apart from clips. it provides a generalized logical description of an implementation of functionality, which is responsible for matching data (facts) against rules (production) in a pattern-matching production system. a production (rule) consists of one or more conditions and a set of actions which may be undertaken for each complete set of facts that match the conditions. 4.1. pattern-matching performance the efficiency of the rete pattern-matching algorithm is based on the assumption that data changes slowly over time. this assumption will fail for applications where rapid data change can occur as it is often the case in gis. because of this, g.infer should not be perceived as a replacement of optimized grass tools such as r.mapcalc, which manipulate each cell of a raster layer [8]. however, in many cases data pre-processing will allow to comply with the rete assumption. such approaches include the grouping of data into larger sets, limiting rule-based intervals to elements which transition into another set, or to convert numeric value ranges into symbolic values like “unavailable, nominal, critical”, retracting the current fact and assert a new one only if the symbolic value has changed. for g.infer, such pre-processing can be achieved by grass modules such as r.clump, r.mask, r.reclass or r.recode. 5. embedding clips in grass clips and grass are based on the c language. until now, no close-coupling below the api based on the c api has been published. of the numerous software projects which embed clips in other programming languages, two are currently known to have been used to connect clips and grass gis. the clips and perl with extensions (cape) project was developed in the late 1990s [11]. cape closely integrates clips and the procedural programming language perl, and provides extensions to facilitate building systems with an intimate mixture of the two [12]. the grasscape libraries were a merger of cape and grass5 to provide rule based programming in a grass environment [19]. this was used to assess the validity of radar meteorological data products, to generate warning messages for the general public for storm events and to create rainfall intensity maps for soil erosion studies. the development of grasscape was stopped in 2003. pyclips is an extension module for the python language, interfacing it with clips [5]. 
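to make the preceding description of clips concrete, the fragment below shows the style of session that an embedding such as pyclips allows: a template fact, a rule that pattern-matches on it with a salience value, and a run of the rete engine. the rule and the threshold are invented for illustration and are not taken from g.infer; the calls used (Clear, Build, Reset, Assert, Run, PrintFacts) follow the pyclips 1.0 api conventions, so details may differ in other versions.

```python
# a minimal pyclips session; hypothetical rule, not taken from g.infer itself
import clips

clips.Clear()
# template for a raster-cell fact: coordinates and a numeric value
clips.Build("""
(deftemplate cell
  (slot x) (slot y) (slot value))
""")
# rule: flag every cell whose value exceeds a threshold; the salience value
# controls firing priority when several rules are activated at the same time
clips.Build("""
(defrule flag-high-cell
  (declare (salience 10))
  (cell (x ?x) (y ?y) (value ?v&:(> ?v 100)))
  =>
  (assert (flagged ?x ?y)))
""")
clips.Reset()                                   # initialise working memory
clips.Assert("(cell (x 1) (y 1) (value 150))")  # would normally come from a layer
clips.Assert("(cell (x 1) (y 2) (value 20))")
clips.Run()                                     # rete-driven forward chaining
clips.PrintFacts()                              # the first cell is now flagged
```

the rule fires only for facts matching its condition, so adding further facts or rules does not require changing the existing ones; this is the loose coupling and data-driven behaviour described above.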
python has become increasingly popular for scripting in grass gis and was selected as the reference language for grass 7.0 extensions. a reimplementation and extension of grasscape based on pyclips was started in 2011. this development eventually resulted in the grass module g.infer.

5.1. the grass gis module g.infer

g.infer is a grass add-on module for grass 6.4.x and grass 7.0. it allows one to define and execute rule-based, data-driven processing of grass data layers. currently, raster, volume and point vector layers can be processed with g.infer. in addition, grass environment variables, including the grass location region settings, can be queried and manipulated within the g.infer production system. access to various parameters of the embedded clips instance is provided by options and flags accessible both from the command line interface and from a grass graphical user interface (gui). in addition to automated inference runs, an interactive mode allows the user to interact with the rule-base environment on the fly. the g.infer module further allows access to grass modules and their output. this makes the set-up of rule-driven gis workflows possible. the development of g.infer is currently supported by gisix.com to develop sample applications and create performance metrics. the module will be released as a grass add-on in late 2012.

gis-based inference workflow

this section describes the tasks executed by g.infer to allow the processing of gis layers by rule-based inference in the clips environment. the involved tasks are best described using both grass- and clips-centered perspectives.

grass gis-centered workflow

from the perspective of a grass gis user, g.infer provides access to a production rule system to set up and maintain specific rule-based, data-driven tasks in grass as an expert system. the following steps are required to implement such an expert system and to conduct a rule-based analysis of spatial data layers: in the preparation phase, the goal of the analysis must be defined. the knowledge and expertise needed to reach this goal is written down as plain-language rules (knowledge engineering). this will likely require the involvement of human domain experts. the plain-language rules are then translated into clips rule syntax and stored in a rule-base file. in this step, the names of the grass layers to be queried must be used in the clips rules. the standard naming convention for addressing grass layers in clips rules is provided in the g.infer html documentation [21]. in the application phase, g.infer is invoked with the required parameters: the provided grass layers and the rule-base file are ingested. the user can opt to have elements of the rule- or fact-base printed out, or to interact with the inference environment via a command line prompt. the inference run will then commence unless the early-abort flag has been set. a successful inference run leads to the update of selected gis data layers and, optionally, the creation of new vector layers and log files.

production system-centered workflow

from the perspective of the clips production system, it operates in g.infer in an embedded environment which provides inference-related parameters and data. the overall clips-based process begins with the setting of inference engine-related parameters and the definition of fact templates for each gis layer to be used.
in turn, the content of the gis layers is copied into facts for the inference process, and the rule-base is set up from the provided rule-file. at this stage interactive access via a clips prompt can allow one to list and manipulate the current content. if no abort signal is given by the user from the grass layer, the inference process starts. geoinformatics fce ctu 8, 2012 21 löwe, p.: introducing the new grass module g.infer the existing rules are pattern-matched against the existing facts to start the firing of the first rules, resulting in modifications and extensions of the fact-base. this can lead to renewed firing of rules, starting an iterative process. once the firing of rules ceases, the inference process ends. for the rule-based system, the final updating of gis layers from parts of the fact-base is transparent. figure 1: interaction of the g.infer module with related grass gis components and the clips production rule system. human experts (shown as figurines) can interact with this work environment independently on multiple levels. knowledge engineering and management g.infer provides multiple levels of knowledge modelling and rule-programming approaches, allowing one to use increasingly complex techniques when required. the options range from closely-coupled to loosely-coupled inference rules, extensions of the clips language by functions, rulebase stratification by salience values or its partitioning into modules, up to ontologies and object oriented modelling using the cool language. depending on the techniques used for knowledge engineering, the evaluation and testing of a rule base can become a complex task. multiple factors are to be considered, ranging from simple typos to content issues of input data layers and nontrivial rule precedence issues. this requires options for stepwise interactive execution and checking the inference process, the capability to log specific processes for later analysis and to save all or parts of the rule base or the object instances. such features exist in the clips environment and can be accessed by gis users as grass module flags and -options in g.infer. geoinformatics fce ctu 8, 2012 22 löwe, p.: introducing the new grass module g.infer 6. business processes in grass gis software projects such a jess and drools have been used in the past years to apply the rulebased data-driven approach to new tasks, beyond the classical field of classification topics [2][4]. they are used to set up business-themed systems, in a vertical stack of tasks, including knowledge engineering, management and deployment of rules, collaboration between rulebased systems, analysis and end user tools. software systems to handle such task stacks are called business rule management systems. the emerging methodology of describing the application of rule-based systems in enterprise environments for structured, product-generating activities has been named the business rules approach [2][15]. this is also of interest for gis application and the related workflow perspective: grass gis modules can be used in a similar fashion to set-up data-processing workflows, while on a higher level, grass gis-based workflows can be fully integrated into greater workflow-chains. inference processes in g.infer can trigger the modifications of its fact-base, but they can also be used to execute further grass modules or scripted grass-workflows. 
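a minimal python sketch of this workflow control: once a rule finds a matching fact, its action launches a further grass module through the grass scripting library and asserts a fact about the result. the fact names and the r.slope.aspect call are illustrative assumptions and do not reflect the actual g.infer rule syntax.

# toy illustration of an inference step that triggers further grass processing:
# a fact announcing a newly imported elevation layer causes a rule action to
# run r.slope.aspect and to assert a fact about the derived layer.
import grass.script as gs

facts = {("layer-imported", "elevation_2012")}

def rule_derive_slope(facts):
    for predicate, layer in sorted(facts):
        if predicate == "layer-imported":
            # rule action: execute a grass module, then update the fact base
            gs.run_command("r.slope.aspect", elevation=layer, slope=layer + "_slope")
            facts.add(("layer-processed", layer + "_slope"))
            facts.discard(("layer-imported", layer))

rule_derive_slope(facts)
print(sorted(facts))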
by doing this, it is possible for an inference process to have new data imported into grass, have it processed and have the outcome to extend its fact-base. further, direct queries to the user requesting interactive input can be triggered by rules. this allows one to effectively have a data-processing workflow in grass gis controlled by an inference process within g.infer. the overall process is illustrated in figure 2. the topic of interaction between cool object instances and the rule-base will be covered in a later publication. figure 2: overview of the interactions between the grass gis environment (green), the clips-based inference environment (blue) and external data sources (grey), highlighting (red) the potential for workflow control to be exerted by the inference process. geoinformatics fce ctu 8, 2012 23 löwe, p.: introducing the new grass module g.infer 7. application scenarios the forerunner modules of g.infer, r.[b]infer and grasscape have already enabled rulebased inference for grass gis, yet did not succeed to attract a large audience of gis application developers in the long term. as a consequence, their porting to the current versions of grass 6.x and grass 7 did not occur for lack of serious need. the same challenge applies for g.infer to give today's community of grass application developers significant added value in their work. for this reason, a cookbook-type publication will follow up this introductory article, providing hands on application examples with respective rule bases to lower the learning curve, and to show inference-based applications on varying levels of complexity. as g.infer connects the domains of gis, workflows, classification, and rule-based systems, the module can be applied for at least three different application scenarios in general: 1. gis-based classification tasks, where g.infer can be used to quickly set up and apply rule-based data-driven classification of varying levels of complexity. this includes the combination of queries on information from both rasterand vector-layers and derived facts (e.g.: „if a location is both an archeological site [vector information] and an abandoned mine [raster information] then assert geoarcheological monument“). classification rules can be flexibly chained within the rulebase: „if a location is an abandoned mine then assert no-trespassing“; „if the location is a monument and today-is-national monument day then assert guided-tours-available“ 2. workflows and business processes, where gis-based data processing chains need to adapt to changing data quality, human user input or time constraints. this allows to use rules similar to the exception-construct known from other programming languages, such as python: if “the current grass region is smaller than threshold x” then “assert no-use-satellite-images; use-aerial-photography”. 3. extensions for the underlying production system: in this case, the perception is reversed. from the standpoint of the embedded production system, the g.infer-provided interface to grass gis is merely an extension for advanced a.i.-focused tasks such as ontology modelling or intelligent software agents: while g.infer rules define knowledge about cause-effect actions among certain entities (facts), an ontology is a structural framework defining the object classes of the entities for the current knowledge domain and their properties and relations (taxonomy). 
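the rule chaining quoted in scenario 1 can also be written down as a small executable python sketch before turning to the ontology and agent topics; the locations, attributes and class names below are invented for illustration only.

# toy version of the chained classification rules from scenario 1, operating on
# per-location attribute sets instead of real raster and vector layers.
locations = {
    "loc_1": {"archeological-site", "abandoned-mine"},
    "loc_2": {"abandoned-mine"},
}
today_is_monument_day = True

def classify(attrs):
    # rule 1: archeological site + abandoned mine -> geoarcheological monument
    if {"archeological-site", "abandoned-mine"} <= attrs:
        attrs.add("geoarcheological-monument")
    # rule 2: abandoned mine -> no trespassing
    if "abandoned-mine" in attrs:
        attrs.add("no-trespassing")
    # rule 3: chains on the conclusion of rule 1
    if "geoarcheological-monument" in attrs and today_is_monument_day:
        attrs.add("guided-tours-available")
    return attrs

for name, attrs in locations.items():
    print(name, sorted(classify(attrs)))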
within g.infer, a simple implicit ontology for the grass gis domain is used, to translate grass layers in corresponding domain objects which are defined via clips templates. so the clips templates and their interrelation define the ontology. any g.infer application can set-up additional ontologies for their specific knowledge domain. another field for exploration are intelligent rulebased software agents implemented in g.infer. such autonomous entities are capable to observe and manipulate their surroundings while trying to achieve goals, effectively interacting with the gis layers by manipulating the corresponding spatially-enabled facts in g.infer. they can be distributed in gis-geographical space if they posses spatial locations (e.g. virtual sensor networks) and can become mobile if they have also the means to change their current position within the grass location. another option are multi-agent systems (mas), which are able to communicate to achieve a common goal. geoinformatics fce ctu 8, 2012 24 löwe, p.: introducing the new grass module g.infer 8. conclusion the new add-on module g.infer re-introduces generic rule-based data-driven modelling to grass gis for the current versions of grass 6.4.x and grass 7. it provides a new flexible interface between grass gis-based geoscientific modelling and artificial intelligence (a.i.) research based on the c-language integrated production system (clips) toolkit. this allows to develop rule-based data-driven processing of grass gis data sources by expert systems encoded as clips knowledge bases. the description of the theoretical and developmental background of the g.infer module already brings up possible application scenarios, ranging from the rule-driven formulation of hard to describe gis-classification tasks, the flexible set-up and management of gis workflows to artificial intelligence-focused topics such as the ontology management, defining taxonomies of knowledge domains, and the exploration of intelligent software agents encoded as g.infer rulebases. detailed examples of the practical application of g.infer for a range of real world problems will be provided in a follow up publication. references [1] buehler k. (1990). a gis providing grounds for water resources research http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1188&context= watertech [2] buehler k. (1999). r.binfer documentation https://svn.osgeo.org/grass/grass/branches/releasebranch_5_5/html/html/r. binfer.html [3] browne p. (2009) jboss drools business rules. packt publishing. isbn 1-847-19606-3 [4] forgy c., (1982) rete: a fast algorithm for the many pattern/many object pattern match problem, artificial intelligence, 19 [5] friedman-hill e. (2003). jess in action. manning publications. isbn 1-930-11089-8 [6] garosi f. (2008). pyclips manual release 1.0. http://sourceforge.net/projects/pyclips/files/pyclips/pyclips-1.0/ pyclips-1.0.7.348.pdf/download [7] giarratano j., gary r. (2004). expert systems: principles and programming. course technology. isbn 0-534-38447-1 [8] giarratano, j.c. (2007). clips user's guide. http://clipsrules.sourceforge.net/documentation/v630/ug.pdf [9] giarratano, j.c. (2007). clips reference manual: basic programming guide. 
http://clipsrules.sourceforge.net/documentation/v630/bpg.pdf
[10] giarratano, j.c. (2008). clips reference manual: advanced programming guide. http://clipsrules.sourceforge.net/documentation/v630/apg.pdf
[11] graham p. (1995). ansi common lisp. prentice hall. isbn 0-133-79875-6
[12] inder r. (1998). cape users manual, etl technical report etl-tr98-3, electrotechnical laboratory, tsukuba, japan.
[13] inder r. (2000). cape: extending clips for the internet, knowledge-based systems 13 (2000), elsevier
[14] jasiewicz j. (2011). r.fuzzy, grass addons repository. http://trac.osgeo.org/grass/browser/grass-addons/grass6/raster/r.fuzzy
[15] jasiewicz j., di leo m. (2012). application of grass fuzzy inference system in flood prone areas prediction. http://geoinformatics.fsv.cvut.cz/gwiki/application_of_grass_fuzzy_inference_system_in_flood_prone_areas_prediction
[16] jboss community documentation (2008). the rule engine. http://docs.jboss.org/drools/release/5.4.0.cr1/drools-expert-docs/html/ch01.html
[17] lake m. w. (2000). magical computer simulation of mesolithic foraging. in kohler, t. a. and gumerman, g. j., editors, dynamics in human and primate societies: agent-based modelling of social and spatial processes. oxford university press, new york.
[18] lake m. w. (2000). magical computer simulation of mesolithic foraging on islay. in mithen, s. j., editor, hunter-gatherer landscape archaeology: the southern hebrides mesolithic project, 1988-98, volume 2: archaeological fieldwork on colonsay, computer modelling, experimental archaeology, and final interpretations. the mcdonald institute for archaeological research, cambridge.
[19] lake m. w. (2002). magical for grass 4.x. http://www.ucl.ac.uk/~tcrnmar/simulation/magical/manual/index.html
[20] löwe p. (2004). technical note: a spatial decision support system for radar meteorology in south africa. transactions in gis, 8(2), blackwell publishing ltd, oxford.
[21] löwe p. (2005). knowledge management and grass gis: r.infer. grass-newsletter 01/2005, issn 1614-8746
[22] löwe p.
(2012). g.infer documentation. http://grasslab.gisix.com/scripts/g.infer/g.infer.html
[23] lustenberger m. (2012). r.agent, grass addons repository. http://trac.osgeo.org/grass/browser/grass-addons/grass7/raster/r.agent
[24] martin m., westervelt j. (1991). grass 4.0 inference engine: r.infer. http://grass.osgeo.org/gdp/raster/infer.ps.gz
[25] netzel p. (2011). implementation of ann in grass – an example of using ann for spatial interpolation. http://www.wgug.org/images/stories/materialy/20110519praga-ann.pdf
[26] jackson p. (1998). introduction to expert systems. addison wesley. isbn 0-201-87686-8
[27] puppe f. (1993). systematic introduction to expert systems. springer. isbn 3-540-56255-9
[28] riley g. (2008). the history of clips. http://clipsrules.sourceforge.net/whatisclips.html#history
[29] rudolph g. (2008). some guidelines for deciding whether to use a rule engine. http://www.jessrules.com/guidelines.shtml

implementation of sqlite database support in program gama-local

vaclav petras
department of mapping and cartography, faculty of civil engineering, czech technical university in prague

keywords: gnu gama, adjustment of geodetic networks, programming, c, c++, databases, sqlite, callback functions

abstract

the program gama-local is a part of the gnu gama project and allows adjustment of local geodetic networks. before this project was realized, gama-local supported only xml as an input format. i designed and implemented support for the sqlite database, and thanks to this extension gama-local can read input data from an sqlite database. this article focuses on the specifics of using callback functions in c++ with the native sqlite c/c++ application programming interface. the article provides a solution for the safe calling of callback functions written in c++. the callback functions are called from a c library, and the c library itself is used by a c++ program. the provided solution combines several programming techniques which are described in detail, so this article can serve as a cookbook even for beginner programmers. this project was accomplished within my bachelor thesis.

introduction

gnu gama is a library and set of programs for the adjustment of geodetic networks. the project is licensed under the gnu gpl and is written in the c++ programming language. its main author and developer is professor aleš čepek [1], but it has many other contributors. for numerical solutions of least squares adjustment, several numerical algorithms (e.g.
singular value decomposition and gram-schmidt orthogonalization) can be used in gnu gama [2]. program gama-local allows adjustment of local-geodetic networks. my work was to implement the support of reading input data from sqlite 3 database [3]. this paper deals with the specifics of using c library (sqlite) in c++ program (gamalocal). these specifics result mainly from different function linkage conventions and different approaches to exception handling in c and c++. all work described here was done within my bachelor thesis [4] which was also used as the main source for writing this paper. sqlite and gama-local program gama-local is able to process classic geodetic measurements (distances, angles, ...) and also measurements such as vectors and coordinates. input data can be stored in a gama specific xml file or in a sqlite 3 database file. the same data are stored in both formats. only identifiers used to connect data in sqlite database are not in xml file because in xml file the most of relations between objects are represented by aggregation. geoinformatics fce ctu 2011 73 petras v.: implementation of sqlite database support in program gama-local formerly gama-local supported only xml input. during development of qgama (gama-local graphical user interface) [5] it was realized that sqlite database can be useful for this gui application. its next version will be based on using sqlite. to keep the full compatibility between gui based qgama and command-line based gama-local it was necessary to support sqlite. the support of sqlite database file as an input is sufficient now because only sqlite input is supported in qgama. furthermore, sqlite database provides several advantages for gama-local users. for example, more than one (input) network can be stored in one file which is not possible with current gama-local xml format. database schema used by gama-local can be also used as a part of larger database since other data (tables and columns) are simply and naturally ignored during processing database file by gama-local. this is not true for gama (expat based) xml parser which does not ignore additional relations and values (represented by xml attributes or elements). generally, both xml and sqlite has advantages and disadvantages. for example xml file is in contrast to sqlite database file human readable and editable. hence sqlite in gnu gama is not a replacement for xml input but it is intended to be an alternative for whom xml is not the right option (e.g. they don’t have a good library support). the sqlite support is available in gnu gama version 1.11 [6]. all code related to sqlite is in class sqlitereader and its implementation, so i will often refer to this class or its implementation (e.g. readerdata class). database schema used by gama-local was developed separately and can be obtained with gnu gama distribution. its description can be found in the latest gnu gama manual (available with latest gnu gama version from git repository [7]). sqlite c/c++ api sqlite database has native interface for c and c++ which i will refer as sqlite c/c++ api. sqlite c/c++ api provides several interfaces. i will focus on the two most important interfaces. the first one, which can be called classic, contains functions for executing sql statement and for retrieving attribute values from result (in case of select statement). the second one relies on callback functions. 
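before looking at the two interfaces in detail, the following sketch shows, using python's standard sqlite3 module, the kind of query-and-fetch work that sqlitereader performs through the native c/c++ api; it is only a conceptual analogy, and the table and column names are hypothetical rather than the actual gnu gama database schema.

# conceptual illustration only: reading observation data from an sqlite file
# with python's standard sqlite3 module. gama-local itself does this in c++
# through the native sqlite c/c++ api; the schema below is hypothetical.
import sqlite3

connection = sqlite3.connect("network.db")
cursor = connection.cursor()
# hypothetical table of observed distances between network points
cursor.execute("SELECT from_point, to_point, value, stdev FROM distances")
for from_point, to_point, value, stdev in cursor.fetchall():
    print(from_point, to_point, value, stdev)
connection.close()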
working with classic interface consists of calling prepare functions, execute functions, functions for retrieving attribute values and for finalizing statements. all functions return a return code which has to be checked and in case of an error, an error message should be checked. resulting code can be very long and the using of classic interface can be tedious and error prone. however, classic interface is flexible enough to enable wrapping interface functions by something more convenient. there are several possibilities. in c++ language raii (resource acquisition is initialization) technique can be used. this means that c functions and pointers to structs could be wrapped by c++ class with constructor and destructor. reporting errors by error codes would be probably replaced by exceptions. in c language some wrapper function or functions can be written. actually if you decide to use some wrapper, it is not necessary to write wrapper function on geoinformatics fce ctu 2011 74 petras v.: implementation of sqlite database support in program gama-local your own because it already exists. it is function sqlite3_exec from interface using callback functions. function sqlite3_exec is the only one function in this interface (except functions for opening and closing database and for freeing error message). project gnu gama uses interface using callback functions. it was chosen for implementation because this interface is considered to be the most stable one in terms of changes between versions of sqlite c/c++ api. there were no changes in this interface between the last [8] and the current version of sqlite [9]. simplified example of using a callback function with library function sqlite3_exec is shown bellow. // library function int sqlite3_exec(int(*pf)(void*, int, char**), void* data) { // ... int rc = pf(data, /*...*/); // ... } library function gets pointer to callback function as a parameter pf. the callback function is called through pointer pf. object data given to function sqlite3_exec by pointer to void is without any change passed to the callback function. the callback function is invoked for each result row and parameters are attribute values of one row of a sql query result. a user of sqlite c/c++ api writes the callback function and puts the appropriate code in it. // callback function (user code) int callback(void* data, int argc, char** argv) { // ... get values from argv } all work is done in the callback function, so once a user of sqlite c/c++ api has it, he can simply call function sqlite3_exec and pass the pointer to the callback function and the object represented by pointer data. // main program (main user code) int fun() { readerdata* data = /*...*/ int rc = sqlite3_exec(callback, data); } object represented by pointer data can be used in the callback function for storing data from parameters. but first, pointer data has to be cast from void* to particular type (readerdata* in this case). generally, any type (class) can be chosen depending on the user needs. later i will show how it is used to store information about an exception. using c and c++ together several issues has to be considered when c and c++ are used together. the main issues are dealing with function linkage and exception handling. function linkage is partially solved by (sqlite) library authors. but linkage of callback functions has to be handled by library users. we have to deal with exception handling only when we really use and need exceptions. gnu gama uses c++ exceptions extensively. 
almost all error states are reported by throwing an exception. callback functions use gnu gama objects and functions. therefore, exception can be thrown in callback function. there is no other option but to deal with exception since geoinformatics fce ctu 2011 75 petras v.: implementation of sqlite database support in program gama-local design decision about using exceptions in gnu gama project was already done. decision was mainly influenced by recommendations from [10]. functions functions in c and c++ have different linkage conventions. functions written in c and compiled by c compiler can be called from c++ but the function declaration has to specify that function has c linkage. the c++ standard specifies that each c++ implementation shall provide c linkage [11]. the c linkage can be specified by extern "c" declaration or by enclosing function declaration with extern "c" block: extern "c" int f(int); extern "c" { int g(int); } common practice used by libraries (c or c++) is to share header files between c and c++. it is achieved using preprocessor: // lib.h: #ifdef __cplusplus extern "c" { #endif void f(); void g(int); #ifdef __cplusplus } #endif c compiler ignores extern "c" block but c++ compiler knows that functions have c linkage and compiler uses this information while linking to library or object file. function pointers the similar rules which apply to functions apply also to function pointers. the standard [11] says: two function types with different language linkages are distinct types even if they are otherwise identical. this means that you have to declare c function pointer type inside extern "c" block and handle c function and c++ function pointers separately. this is an example of declaration taken from sqlitereader implementation: extern "c" { typedef int (*sqlitecallback)(void*, int, char**, char**); } however, gcc [12] provides implicit conversion between c and c++ function pointers. it is allowed as an implementation extension, however doing conversion without any warning and not allowing overloading on language linkage is considered as a bug [13]. function visibility functions in c and c++ have definitions (function body, function code) and declarations (function signature). function definitions are globally visible by default but function declarations have local visibility. declaration is visible from the point of declaration to the end of translation unit (source file with included header files). geoinformatics fce ctu 2011 76 petras v.: implementation of sqlite database support in program gama-local it is necessary to provide declaration to use function defined in another translation unit. this is always done by including a particular header file. note that function declarations can be written by hand since including a header file is textual operation only (but this makes sense only in examples). to avoid name clashes c++ introduced namespaces. although there is no such thing as namespace in c language namespaces can be used with declaration extern "c" together. so the example above can be rewritten using namespace: // lib.h: #ifdef __cplusplus namespace lib { extern "c" { #endif void f(); void g(int); #ifdef __cplusplus } } #endif now if you are using lib.h header file with c++ compiler, you have to specify lib namespace to access functions f and g. how this can be used with existing c header files (e.g. with c standard library) is described in [10]. nevertheless, function f and g are still c functions. 
this implies that you can provide another declaration without namespace (but with extern "c") and use functions without specifying namespace. this can lead to errors or at least name clashes. in many cases it is suitable to hide function definition (which is global by default), so it is not visible from outside a translation unit. this is, for example, the case of callback functions which are mostly part of an implementation and therefore they shouldn’t be globally visible. the function hiding is done in c++ by unnamed namespace (sometimes incorrectly referred as anonymous namespace) [11]. an unnamed namespace behaves as common namespace but with unique and unknown name (this is done by c++ compiler). however, unnamed namespace and extern "c" cannot be used together. function previously defined and declared as extern "c" in unnamed namespace can be misused or can break compilation, because unnamed namespace behaves as common namespace and extern "c" function declaration without namespace can be provided (e.g. accidentally). example follows. // file_a.cpp: // unnamed namespace namespace { // bar_i intended to be defined local in file_a.cpp extern "c" int bar_i(int) { return 1; } } // file_b.cpp: // bar_i declared (e.g. accidentally) extern "c" int bar_i(int); void test() { // bar_i used (without any error or warning) bar_i(1); } the function hiding can be also done by declaring function static. this is how the hiding is done in c language so it looks appropriately for extern "c" functions. next paragraph geoinformatics fce ctu 2011 77 petras v.: implementation of sqlite database support in program gama-local discuss it. declaring functions in extern "c" block as static works in gcc as expected. i haven’t succeeded in verifying that combination of extern "c" and static works generally on all compilers. as a result, this solution wasn’t used in sqlitereader implementation. instead all callback functions was prefixed. function definitions are visible but prefix should prevent from misusing by accident. exception handling handling error (or exception) states is done in c++ by exception handling mechanism. exceptions have several advantages. better separation of error handling code from ordinary code is one of them. another advantage is that exceptions unlike other techniques force programmer to handle error states. for example, return code can be ignored and if there was an error program stays in undefined state. on the other hand, thrown exception can not be ignored since unhandled exception causes program to crash immediately. c language has no standard exception handling mechanism therefore callback functions called from c library must not throw exception. so a callback function passed to sqlite3_exec function have to catch all exceptions thrown inside its body (or by other functions inside its body). from another point of view, function declared extern "c" has to behave as a c function and naturally c function does not throw any exception. we should be aware of the fact mentioned above that the code in function body is c++ code and c++ code can use exceptions without any restriction. consequently, the callback function has to catch all exceptions. the whole part of function body where exception can be thrown has to by enclosed in try block and the last catch block has to be catch with ellipsis. int callback() { // cannot throw exception try { // can throw exception } catch (std::exception& e) { // handle exception(s) derived from std::exception } catch (...) 
{ // handle unknown exception(s) } } catching all possible exceptions is not enough, it is necessary to report error to callback function caller (it is library function sqlite3_exec in our case). an error can be reported in several ways, in case of sqlite c/c++ api it is returning non-zero return code. error state reporting is solved easily but the problem is how to provide information about the caught exception (its type and additional information contained in exception object). there are some solutions like assigning return code values to particular exception types. the robust solution which keeps information about exception requires to implement polymorphic exception handling and will be discussed later. there is also completely different solution of handling exception when interfacing with c language — to use no exceptions at all. however, we would lose all advantages of using geoinformatics fce ctu 2011 78 petras v.: implementation of sqlite database support in program gama-local exceptions. the second shortcoming of this solution is that exceptions can be already used in code or library we are using. this is the case of the standard library or gnu gama project. polymorphic exception handling polymorphic exception handling requires to implement cloning and also similar technique for rethrowing exceptions. both will be described in this section and additional information can be found in [10]. cloning standard copying by copy constructor cannot by used in cases when object is held by pointer or reference to a base class because actual type of object is unknown. using copy constructor would cause slicing [14]. while handling exceptions, references to base exception class are used and proper copy of exception has to be stored (in sqlitereader implementation). proper copy means that new object has the same set of attribute values as the old one and also new object has the same type as the old one. this is the case when cloning must be used instead of copying by copy constructor. cloning is made by virtual function clone which calls copy constructor. in virtual function actual type of object is known and so the right copy constructor is called. function clone creates new object by calling operator new and returns pointer to a new object (the user of function is responsible for freeing allocated memory). next example shows implementation and simple usage of cloning. class base { public: virtual ~base() { } virtual base* clone() { return new base(*this); } virtual std::string name() { return "base"; } }; class derived : public base { public: virtual derived* clone() { return new derived(*this); } virtual std::string name() { return "derived"; } }; void print(base* b) { std::cout << "name is " << b->name() << std::endl; } void test() { base* d = new derived(); base* b = d->clone(); // creates new object print(b); // prints: name is derived delete b; delete d; } there is still danger that we accidentally copy (by copy constructor) the object we have by pointer to base class. this can be avoided by declaring copy constructor in base class protected. the second thing we should avoid is forgetting to implement clone function in derived classes. this would lead to creating objects of base class instead of derived one. it is helpful to declare clone function in base class pure virtual. 
unfortunately, this can be geoinformatics fce ctu 2011 79 petras v.: implementation of sqlite database support in program gama-local applied only for abstract base class and it ensures implementing of clone function only for direct subclasses [14]. the same technique which was used to implement cloning can be used more generally to create new objects with various parameters and not only the copy of current and actual object. this technique can be used even for completely different things such as rethrowing exceptions. storing and rethrowing exceptions classes base and derived from previous section will be used here as exception classes. both contain functions for cloning and for getting a type name. both classes also have public copy constructor automatically created by compiler. public copy constructor is necessary to allow throwing exceptions (by throw statement). an example of standard exception handling is in the following listing. try { throw derived(); } catch (derived& e) { std::cout << "derived exception" << std::endl; } catch (base& e) { std::cout << "base exception" << std::endl; } an exception is thrown somewhere in try block. the thrown exception can by caught by one of the caught exceptions. commonly accepted rule is to throw exceptions by value and to catch them by reference. the order of catch blocks is important (first derived classes then base classes). now consider the case we have to catch all exceptions, keep information about exception type and later use this exception. this is the case of callback functions used with sqlitereader class. we can create copy of caught exception and store a copy (e.g. in sqlitereader class attribute) and later (when we can control program flow) we can pick stored exception up. however, the copy cannot be created by copy constructor because actual type of object is not known. according to previous section, obvious solution is cloning. cloning will ensure correct storing of exception by pointer to base class. pointer to base class allows to use functions from base class interface, for example read error message or get class name in our base-derived example. however, sometimes final handling of exception and reporting error to program user is not the right thing to do. if it is not clear how to handle an exception, the exception should be thrown again or better say rethrown. if you try to throw exception directly by throw statement using pointer to base class, you will fail because throw statement uses copy constructor and known type of object is determined by a pointer type. the pointer type is pointer to base class. therefore throw statement will call base class copy constructor and it will slice the object. related code snippets can look like this: base* b = 0; // ... b = caughtexception.clone() // cloning of exception somewhere // ... geoinformatics fce ctu 2011 80 petras v.: implementation of sqlite database support in program gama-local throw *b; // rethrowing of exception direct use of throw statement discards useful information. slicing while rethrowing can be avoided in the same way as slicing while copying. polymorphic rethrowing or simply polymorphic throwing has to be introduced. this will be provided by function raise. this function is very similar to clone function but instead of creating new object it throws an exception. the name raise is more appropriate than the name rethrow because function can be used more generally than only for rethrowing (the name throw cannot be used because it is c++ reserved keyword). 
the function implementation is the same for all classes and is shown bellow. virtual void raise() const { throw *this; } an exception thrown by this function will has appropriate type because appropriate copy constructor will be used. a usage of this function is obvious and is shown in following code snippet. b->raise(); implementation in sqlitereader class the exception hierarchy used in sqlitereader class is in the source code listing below. abstract class exception::base provides interface for cloning and rethrowing exceptions (functions clone and raise). class is derived from std::exception in order to add function what to interface and to allow handling standard exceptions and specific gnu gama exceptions together when necessary. there are many other exceptions in gnu gama but they are defined and handled in the same way. // inderr.h: namespace exception { class base: public std::exception { public: virtual base* clone() const = 0; virtual void raise() const = 0; }; } // exception.h: namespace exception { class string: public base { public: const std::string str; string(const std::string& s) : str(s) { } ~string() throw() {} string* clone() const { return new string(*this); } void raise() const { throw *this; } const char* what() const throw() { return str.c_str(); } }; } // sqlitereader.h: namespace exception { class sqlitexc: public string { public: sqlitexc(const std::string& message) : string(message) { } sqlitexc* clone() const { return new sqlitexc(*this); } void raise() const { throw *this; } geoinformatics fce ctu 2011 81 petras v.: implementation of sqlite database support in program gama-local }; } the next source code listing shows callback function code which is common for all callback functions in sqlitereader implementation. all callback functions are declared extern "c" and have name prefixed with sqlite_db_ (as explained in previous sections). the code specific for each callback function is placed in the try block. this code can throw any exception. exceptions derived from exception::base will be caught by first catch block, then cloned and stored. exceptions derived only from std::exception will be caught by second catch block. class std::exception doesn’t allow cloning. therefore this exception will be replaced by exception::string with same what message and then stored. shortcoming is discarding exception type. some improvements can be done by introducing new exception to gnu gama exception hierarchy, e.g. exception::std_exception but it is not such a big improvement because it would save only the information that exception was derived from std::exception and actual type of exception would be still unknown. this has the same effect as adding some string to the what message. the last catch block (catch block with ellipsis) ensures that all other exceptions are caught in callback function body. there is no other way than to store exception which indicates unknown exception. extern "c" int sqlite_db_readsomething(void* data, int argc, char** argv) { readerdata* d = static_cast(data); try { // ... callback’s own code return 0; } catch (exception::base& e) { d->exception = e.clone(); } catch (std::exception& e) { d->exception = new exception::string(e.what()); } catch (...) { d->exception = new exception::string("unknown"); } return 1; } the last source code listing also shows how c++ objects are passed through c to c++ functions. this is common problem with simple solution. 
pointer to void given to c function (and than to callback function) is casted to appropriate type by static_cast. then c++ object can be used as usual. in sqlitereader implementation a wrapper function for sqlite3_exec function was introduced. this wrapper ensures rethrowing of previously stored exception. void exec(sqlite3* sqlite3handle, const std::string& query, sqlitereadercallbacktype callback, readerdata* readerdata) { char* errormsg = 0; int rc = sqlite3_exec(sqlite3handle, query.c_str(), callback, readerdata, &errormsg); if (rc != sqlite_ok) { if (readerdata->exception != 0) { readerdata->exception->raise(); } // ... handle other (non-callback) errors } geoinformatics fce ctu 2011 82 petras v.: implementation of sqlite database support in program gama-local } a clone function creates new instances by calling new operator as well as it is done in catch blocks in callback function. therefore allocated memory has to be freed. deallocation can be done easily in sqlitereader destructor. sqlitereader::~sqlitereader() { // ... if (readerdata->exception) { delete readerdata->exception; readerdata->exception = 0; } // ... } conclusion using c and c++ together information provided in this paper was used for implementing sqlite support in gama-local. however, this paper presents something like reusable design pattern because this information can be used generally when interfacing c++ code with c code or library. the complete source code of sqlitereader class and its implementation can be found in gnu gama git source code repository [15]. gnu gama sqlite database file is an alternative to input xml file. gnu gama users can now choose format which is appropriate for their project. users will also be able to switch from gui application qgama to command-line tool gama-local and back as needed because both programs has the same native format. we will see in the future whether sqlite database file become the main gnu gama format. program gama-local from gnu gama release 1.11 [6] is able to read adjustment data from sqlite 3 database. user documentation and the latest gnu gama version are available in the source code repository [7]. references 1. čepek, aleš. gnu gama manual [online]. 2011-08-16. http://www.gnu.org/software/gama/manual/gama.pdf 2. čepek, aleš; pytel, jan. a note on numerical solutions of least squares adjustment in gnu project gama in: interfacing geostatistics and gis. berlin: springerverlag, 2009, p. 179. isbn 978-3-540-33235-0. 3. sqlite [program]. version 3. 2004-2011. http://www.sqlite.org/ 4. petráš, václav. podpora databáze sqlite pro program gama-local. praha 2011. 75 s. bakalářská práce. čvut v praze, fakulta stavební geoinformatics fce ctu 2011 83 http://www.gnu.org/software/gama/manual/gama.pdf http://www.sqlite.org/ petras v.: implementation of sqlite database support in program gama-local http://geo.fsv.cvut.cz/proj/bp/2011/vaclav-petras-bp-2011.pdf 5. novák, jiří. object – oriented gui for gnu gama. prague, 2010. 63 p. bachelor thesis. ctu in prague, faculty of civil engineering. http://geo.fsv.cvut.cz/proj/bp/2010/jiri-novak-bp-2010.pdf 6. gnu ftp server mirrors. gnu gama release 1.11 http://ftp.sh.cvut.cz/mirrors/gnu/pub/gnu/gama/gama-1.11.tar.gz 7. gnu gama git source code repository http://git.savannah.gnu.org/cgit/gama.git/ 8. the c language interface to sqlite version 2 [online]. modified 2011-09-21. http://www.sqlite.org/c_interface.html. 9. c/c++ interface for sqlite version 3 [online]. modified 2011-09-21. http://www.sqlite.org/capi3ref.html. 10. 
stroustrup, bjarne. the c++ programming language. special edition. addison-wesley, 2000. 1020 p. isbn 0-201-70073-5.
11. iso/iec 14882. international standard: programming languages — c++. american national standards institute, first edition, 1998-09-01. 748 p.
12. free software foundation, inc. gcc, the gnu compiler collection [program]. version 4.4.3, copyright 2009 free software foundation, inc. http://www.gnu.org/software/gcc/
13. gcc bugzilla: bug 2316. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=2316
14. sutter, herb; alexandrescu, andrei. c++ coding standards: 101 rules, guidelines, and best practices. 1st edition. addison-wesley professional, 2004. 240 p. isbn 0-321-11358-6.
15. gnu gama git source code repository: gama-local directory, files sqlitereader.h and sqlitereader.cpp. http://git.savannah.gnu.org/cgit/gama.git/tree/lib/gnu_gama/local/

methodological aspects of architectural documentation

arivaldo amorim
universidade federal da bahia, faculdade de arquitetura, lcad
rua caetano moura, 121 – federação – salvador – bahia – brasil – 40210-905
alamorim@ufba.br

keywords: architectural heritage, architectural documentation, architectural documentation methodology, data capture and processing, multimedia database, heritage information system

abstract: this paper discusses the methodological approach that has been developed in the state of bahia in brazil since 2003 for the documentation of architectural and urban sites using extensive digital technologies. bahia has a vast territory with important architectural ensembles ranging from the sixteenth century to the present day. as part of this heritage is constructed of raw earth and wood, it is very sensitive to various deleterious agents. it is therefore critical to document this collection, which is under threat. to conduct these activities, diverse digital technologies that could be used in the documentation process are being tested. the task is being developed as academic research, with few financial resources, by scholarship students and some volunteers. several technologies are tested, ranging from the simplest to the more sophisticated ones, and are used in the main stages of the documentation project, as follows: overall planning of the work, data acquisition, processing and management and, ultimately, control and evaluation of the work. the activities that motivated this paper are being conducted in the cities of rio de contas and lençóis in the chapada diamantina, located 420 km and 750 km from salvador respectively, in the city of cachoeira in the recôncavo baiano area, 120 km from salvador, the capital of bahia state, and in the pelourinho neighbourhood, located in the historic capital. part of the material produced can be consulted on the website: <www.lcad.ufba.br>.

1.
the context of architectural documentation this paper presents the methodological approach that is being developed since 2003, by lcad the laboratory of computer graphics applied to architecture and urbanism at the architecture school of the universidade federal da bahia through their work for document the architectural heritage in the bahia state, in brazil, using digital technologies [1]. the vast architectural collection stands out as one of the main aspects of the rich cultural heritage of brazil, with important colonial assemblies, such as the ones found in the states of bahia and minas gerais and the city of parati in rio de janeiro, among others. heritages that alongside the modernist brasilia, have symbolic values and represent a significant educational potential. the state of bahia contributes especially to the advance of that legacy, for its valuable historical heritage, arts and culture, one of brazil's richest. inside this huge collection, we highlight urban centers and its architecture, incorporating significant collections of movable and integrated goods of utmost importance for the history of luso-brazilian art. most notably, bahia has a set of preserved urban nuclei that encompasses five centuries of history of the new world, representing the various political moments, the economic and social changes that shaped the country, like the first possession and conquest of territory represented by sets porto seguro, santa cruz de cabrália, salvador and trancoso, the sugar cane and tobacco cycle cachoeira, são felix and maragogipe; the gold cycle, in the eighteenth century jacobina and rio de contas; the diamond cycle in the nineteenth century lençóis, mucugê and igatu [2,3]. geographically, these settlements cover coastal areas of recôncavo and the chapada diamantina. however, this important asset as indeed the whole of brazil is under continuous threat due factors such as urban sprawl and land speculation, the socio cultural changes and new values resulting from them, and subject to degradation and all kinds of accidents. despite the importance and fragility of this collection, most of it are in state of disrepair or lack of specific care, that impairs its preservation. over the last decades, many conservation actions have been implemented (legal protection, recovery and restoration works), but none of them have resulted in effective conservation of such property, whether by insufficiency of human and financial resources, either by the lack of a strategic planning policy that articulates the various social actors through democratic participation. maybe, much of the population did not recognize the importance of this heritage. the project of architectural documentation of historical sites and monuments can be understood as a complex process of systematic and comprehensive planning, acquisition, http://www.lcad.ufba.br/ ________________________________________________________________________________ geoinformatics ctu fce 2011 35 processing, indexing, storage, retrieval, dissemination and delivery of data and information about single buildings or sets of them, including graphical and non-graphical information and metadata for various uses. besides the more obvious applications in conservation and restoration of the buildings, the architectural documentation plays a vital role in preserving the memory of this heritage. this is a highly relevant aspect, given the impossibility of the physical preservation of all significant samples. 
there are several reasons for this, from the simple effect of time and weather, to more serious and dangerous causes such as heavy rains and floods, fires, earthquakes, neglect, abandonment and vandalism, among others. in brazil, this is particularly worrying considering that except for few examples, most buildings from the colonial period were composed of raw earth and wood, materials that are completely destroyed by water and fire. as explained, by their nature and extent, the documentation of architectural heritage in brazil is a strategic issue and involves a great effort for its accomplish. the actions proposed aim to safeguard these monuments. as a result of the documentation project, it is generated a huge multimedia database containing information of buildings such as photographs, photographic panoramas, rectified photographs, orthophotos, technical drawings, various types of 3d geometric models, including point clouds, and other kind of data such as videos, audio tapes, interviews, reports, pictures and historical texts, among others. all steps in this process involve the comprehensive and intensive use of digital technologies. thus, the methodology proposed comprehends five main parts summarized here: ● the overall planning stage, that consider all aspects of the work and the objective conditions for it, as well as the financial support and other resources; ● data acquisition and field work, when the primary or raw data are captured from in situ studies or compiled from secondary sources, which also involves some other technologies; ● data processing and analysis, including handling or manipulation, when the data (primary or secondary) collected are processed to generate the desired products or information and their metadata; ● management of data including indexing, storage, retrieval, data security, access, dissemination and publication of the data and information produced for concerned public and, finally, ● control and documentation of the project, in which should be analyzed in the various aspects involved in the project's implementation, as well as the assessment procedures used, and product quality grades, and also indicators of income, essential to assist in the planning of future works. this set of phases represents a scientific methodological approach for a documentation project in order to achieve best results and the best practices. so, in the development of these activities it is required a set of digital technologies in every related step. 2. global planning every documentation project should be initiated by the global planning of the work, from the earliest stages comprising one or more of three main focuses, namely: the focus of the contractor, usually a public agency that hires and establishes the work to be done, in their qualitati ve and quantitative aspects, being the main elements of this stage the edict and the terms of reference of the work. complementary to this approach there is the focus of the performer, usually a private company that holds possession of these documents, prepare a thorough and detailed work plan in order to achieve the same under conditions of price, quality and deadlines established. finally, in a different approach from the previous two, we can establish the so-called academic approach, adopted in academic studies that involve teaching, research and extension, which covers some issues of the two approaches mentioned above. 
however, there are specific features that particularize the research done in brazilian universities: few technical and financial resources available, undefined scopes and terms, and the absence of a contractor to establish the general context for the work to be performed. this paper addresses all these aspects.

the scope of an architectural documentation project thus includes the general characterization of the work to be undertaken in its quantitative and qualitative aspects, considering the study of the physical site, the purpose of the surveys, the specification of the products to be obtained, data and media formats, and the execution schedule. before the actual start of field work, a careful inspection of the physical site where the architectural documentation project will be developed is therefore necessary, with the purpose of checking the actual conditions for carrying out all activities; this involves the recognition and demarcation of the project area, counting the buildings to be surveyed, and verifying the specific conditions of each of them. moreover, it is necessary to identify additional resources needed to perform the work, such as scaffolding, for example. also related to the physical site is the need to obtain prior authorization from the authorities and owners to carry out the survey, and to evaluate the climatic conditions, so that work is not paralyzed by bad weather.

the clear definition of the purpose of the work is crucial for deciding on the products to be obtained and the technologies to be used in the different phases of the documentation project. the products can be as varied as possible, from traditional technical drawings for cadastral survey made by hand to products obtained through more sophisticated resources such as photogrammetry. the kind of product to be achieved depends crucially on the needs and aims of the project, along with the technologies to be employed. a particularly interesting record of the monuments is given by three-dimensional numerical representations, such as geometric models of edges, surfaces, solids or even point clouds. when technologies are employed to survey three-dimensional data, two-dimensional products such as technical drawings, orthophotos, rectified photos and mosaics are usually obtained from the processing of the primary data. depending on the purpose of the work, other technologies can be employed for the surveying of monuments, such as photographic panoramas, photographs and videos, which are extremely useful and important for capturing the general aspects of the works, such as context and temporality.

depending on the survey purpose and the phase of the documentation project, several technologies will be employed in the field survey, in data processing in laboratories, and in the storage, publication and dissemination of the information produced. the resources available for the work should therefore be assessed, considering the financial, human and technological aspects, so that the expected results can be produced within the requirements of time, cost and quality. finally, control mechanisms should be designed to follow the execution of the work, whether from the standpoint of quality assurance of the products, the time schedule or the financial resources available.
it is also necessary to establish productivity and quality indicators to assess the work from a scientific standpoint, in order to provide experience and reliability for conducting similar projects. this allows a careful evaluation of the methodologies and technologies employed, a particularly important issue considering the pace of technological evolution. nowadays the technological and methodological approaches are practically new in each survey, something that did not happen until the 1980s. this introduces a degree of uncertainty into the results, since each experience is unique.

3. data acquisition

undoubtedly, the evolution of digital technologies has profoundly impacted the documentation of cultural heritage. one of the steps that has undergone major transformation is perhaps the stage of primary field data collection, given the diversity and potential of the technological apparatus that has been developed and is available at increasingly affordable prices. the traditional method of surveying buildings using direct measurement with sketches remains extremely important and useful for its simplicity and its low-cost tools, such as tape measures, plumb-lines and levels. nowadays this method can be enhanced by the use of measurement instruments based on digital microelectronics, such as digital tape measures, laser levels, goniometers and plumb-bobs. these tools increase the accuracy of the survey and reduce the time required to perform the measurements. the method thus remains valid and popular for buildings with simple shapes (few polyhedral faces), without many ornaments and of small size, especially in relation to height. buildings with complex floor plans, with many non-right angles, always represent a problem for the survey, owing to the difficulty of measuring and controlling the deformation of these angles without the aid of topographic instruments. another limitation of this method is buildings of great height, which require scaffolding, expensive and time-consuming to assemble, besides the risk of accidents it always represents.

a way to address the deficiencies of direct measurement is to use, whenever necessary, topography to determine points of difficult access in the surveys, as well as to measure angles in irregular shapes with the aid of a closed topographic traverse, allowing compensation of errors and ensuring the accuracy of linear and angular dimensions. moreover, topography provides a method for calculating the coordinates of points that are inaccessible to direct measurement. although the association of direct measurement with topographical methods was a good solution, the process of determining the coordinates of points from the surveying instruments was very laborious, which was a complicating factor for its use when many points had to be determined. with the automation of surveying instruments, determining the coordinates of points, distances and angles has been greatly simplified, as the calculations and notes are made by the instruments themselves. although the combined use of these two techniques performs very well and solves most problems, special situations and new requirements have driven the development of new surveying technologies and techniques that became possible only through digital electronics. as an example of such particular situations, we can mention buildings with non-polygonal shapes and complex ornamentation on their exterior or interior.
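the determination of a point that cannot be reached with the tape, observed from two stations of such a traverse, reduces to a simple forward intersection once the station coordinates and the observed directions are known. the following sketch illustrates the computation; the coordinates, azimuths and function name are illustrative and are not data from the projects described here.

```python
from math import sin, cos, radians

def intersect_by_bearings(station_a, az_a, station_b, az_b):
    """forward intersection: coordinates of an inaccessible point p observed
    from two known stations a and b. azimuths are clockwise from grid north,
    coordinates are (easting, northing)."""
    ea, na = station_a
    eb, nb = station_b
    a1, a2 = radians(az_a), radians(az_b)
    d_e, d_n = eb - ea, nb - na
    # distance from station a to p along the observed direction
    d1 = (d_e * cos(a2) - d_n * sin(a2)) / sin(a1 - a2)
    return ea + d1 * sin(a1), na + d1 * cos(a1)

# two stations of a small traverse and the directions observed to a cornice point
print(intersect_by_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 315.0))  # -> (5.0, 5.0)
```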
as for new demands, there is the need for three-dimensional representation of the monuments in the computing environment. another technological possibility for the documentation of sites is gnss (global navigation satellite system), whose best known and most popular segment is gps (global positioning system), maintained by the u.s. government. although it may have important applications in particular cases, in most cases this technology plays a secondary role, providing the georeferencing of sites and monuments; we therefore refrain from further considerations here.

another "classic" and well-known technique used in the survey of buildings is photogrammetry, which went through several phases of development: graphic, analog, analytical and digital. digital photogrammetry broke the paradigms of the previous stages, which involved the use of extremely expensive and specialized equipment and a skilled workforce (photogrammetrists), characteristics that until recently kept the use of these techniques for architectural documentation in brazil insignificant. the advent of digital photogrammetry allowed a significant simplification of the procedures, reducing the need for specialized labour, so that the process can be used by architects and engineers. at the same time, the financial resources necessary to purchase equipment such as precision cameras, specific programs and computers are often smaller than the investment previously required. "classical" photogrammetry solved the problem of recording complex shapes through the use of stereoscopy and representation by isovalue curves, while digital photogrammetry allows the generation of different types of geometric models, besides the traditional photogrammetric products such as orthophotos, rectified photos, mosaics and technical drawings. however, photogrammetry still has some problems, especially regarding the taking of the photographs, whether on account of obstructions on the object to be surveyed, inadequate viewpoints, or the difficulty of photographing the top of very high buildings.

in recent decades 3d laser scanning has appeared and been optimized, with great versatility for capturing any type of shape and an impressive speed of data acquisition. this technology produces detailed geometric models as point clouds, in realistic or false colour. although 3d laser scanning raises the surveying of buildings to another level, the costs of its use in architectural documentation are still prohibitive in brazil. in addition, there is the need to process a huge amount of points, which will be discussed in the next section. the primary products of this type of technology are geometric models of points and surfaces, as well as orthophotos; other products are possible through processing of the primary data. the great advantages of this technique are the speed of data capture in the field and the possibility of working in the dark, if the capture of the surface texture of the object is not necessary [4].

finally, the state of the art in building survey is the production of point clouds from photographic methods, using techniques of photogrammetry, pattern recognition, image processing and computer vision [5]. these techniques have been called dense stereo matching [6], photo-based scanning [7] or dense surface modelling (dsm) [8].
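as a rough illustration of what dense stereo matching involves in practice, the sketch below uses the open-source opencv library to compute a disparity map from a rectified image pair and to reproject it into a point cloud. the file names, matcher parameters and calibration matrix are placeholders, not part of the workflow documented in the references above.

```python
import cv2
import numpy as np

# a rectified stereo pair of facade photographs (file names are illustrative)
left = cv2.imread("facade_left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("facade_right.jpg", cv2.IMREAD_GRAYSCALE)

# semi-global block matching; parameters must be tuned for each camera setup
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 ** 2,
    P2=32 * 5 ** 2,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# q is the 4x4 reprojection matrix obtained from stereo calibration (assumed available)
Q = np.load("reprojection_matrix.npy")
points = cv2.reprojectImageTo3D(disparity, Q)   # one 3d point per pixel
mask = disparity > disparity.min()
point_cloud = points[mask]                      # dense point cloud of the facade
print(point_cloud.shape)
```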
although these photo-based techniques are significantly cheaper than 3d laser scanning, the technology is still under development. obtaining the geometric point models still requires a great deal of processing, and the models are still left with many gaps; the experiments known in architectural documentation present only partial models [9]. nevertheless, for the reasons pointed out before, the technology is promising and should produce more significant results in the near future. its main advantages are its low cost compared with laser scanning and the reduced working time in the field. the technology has already been used successfully to produce models of small objects such as statuary.

besides the technologies previously mentioned, employed to capture data for technical documentation to be used in conservation and restoration projects, or even for filing as a safeguard, there are other digital technologies with great potential for representing the building, its surroundings and the temporal context in flexible and versatile ways for other types of application, namely photographic panoramas, videos and films, and audio testimonials.

4. data processing

in architectural documentation, the data processing stage is basically the transformation of the raw data gathered in the field into the desired end products. another advantage of digital technologies is the possibility of generating several products from the primary data collected, depending on the amount of processing, using automated, semi-automatic or manual methods. these processes involve various types of tools and skilled labour and depend primarily on the type of technology and the products to be generated. additional care should be taken, however, since the quality of the captured primary data determines the final result, which again emphasizes the importance of the overall planning of the work. we discuss below the main aspects of the digital technologies used to turn the data into the intended products.

in surveys conducted using direct measurement or topographical methods, data processing is done with cad tools such as geometric design editors or modelers. in these cases the process is manual, interactive, time-consuming and subject to misinterpretation. boards of technical drawings or geometric models for online viewing on the web are the final results. moreover, these models can be used to generate other products, such as 3d animations and overviews using image synthesis techniques, or studies using numerical simulations. when data acquisition is done by photogrammetry, data processing consists in processing the photos and other data gathered in the field through the restitution models implemented in software algorithms, aiming to generate photogrammetric restitution products such as orthophotos, rectified photos, mosaics, technical drawings, and wireframe or surface geometric models. these models can be represented with the object's original textures, in false colour or shaded. the difficulties of the photogrammetric methods lie in the data processing phase: while these methods are very accurate, powerful and versatile, the data processing is still done through interactive methods in which the operator's experience and accuracy are essential. efforts are being made towards process automation, but the generation of useful finished products still requires much human labour. when 3d laser scanning is used, data treatment consists initially in the registration of the various point clouds (partial models) to obtain the complete model.
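the kind of treatment typically applied to such a registered point cloud, downsampling, removal of isolated outliers and normal estimation, can be sketched with an open-source library such as open3d; the library choice, file names and parameter values below are illustrative only.

```python
import open3d as o3d

# registered point cloud of a building facade (file name is a placeholder)
pcd = o3d.io.read_point_cloud("registered_scan.ply")

# thin the cloud to a manageable density before further processing
pcd = pcd.voxel_down_sample(voxel_size=0.01)          # 1 cm voxels

# remove isolated points produced by reflections, passers-by, vegetation, etc.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# estimate normals, a prerequisite for surface reconstruction and orthophoto generation
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

o3d.io.write_point_cloud("registered_scan_clean.ply", pcd)
print(pcd)
```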
based on this model, operations of segmentation, filtering and sorting are carried out to generate geometric models of surfaces, orthophotos, technical drawings and various other products. through photogrammetric techniques and image processing it is possible to associate the coordinates of the captured points with their true colour, by resampling the point cloud on high-resolution photographs taken from angles very close to those used for the capture of the point clouds. the main difficulty of this survey technique is precisely the processing of the point cloud, owing to the size of the files and the amount of processing needed, which requires robust machines and skilled labour; the technology is, however, evolving rapidly. in the surveying technologies that acquire point clouds from photographs, known as dense stereo matching, photo-based scanning or dense surface modelling, the basic geometric model is obtained almost automatically once the parameters to be used in the operation have been supplied to the processing system. after this basic point-cloud model is obtained, the derivative products are generated, such as surface or solid models, technical drawings and orthophotos. however, these second-generation products still require much specialized human labour.

all the technologies mentioned have significant application in architectural documentation, although each may be more suitable and effective in specific situations. the studies and experiments done so far, and their results, indicate that no single technology is sufficiently versatile and efficient to serve the whole range of existing applications. the technology to be used during both data acquisition and processing will therefore depend on many factors, such as the characteristics of the application, the implementation deadlines, and the technological, human and financial resources, besides the experience of the team. the later stages of compiling the data and developing applications that use the information produced are, both for the vastness of the possibilities and for the diversity and sophistication of the available technologies, beyond the scope of this paper, considering the space needed to discuss them in a minimally adequate manner.

5. data management

regardless of the immediate applications that led to the execution of the survey, once the data are produced they need to be indexed, stored and preserved for later use. and to ensure that these data can be used effectively, they must be disclosed, published and retrievable. this closes the cycle that encompasses planning, gathering, processing, indexing, storage, publication, dissemination, retrieval and use of data and information on buildings and architectural ensembles; information which in turn will influence the conservation of, and interventions on, these sites, generating a new cycle of documentation to be repeated indefinitely throughout the lifetime of the building, and sometimes even after its destruction. the set of operations that includes indexing, storage, preservation, publication, dissemination and retrieval, performed on the multimedia database and its metadata, is called data and information management. this is perhaps the most complex set of operations, given the amount of different types of knowledge needed and the technologies and professionals involved in its planning and execution, as discussed below.
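before turning to the professional roles involved, a very small illustration of the indexing and storage side of this management is given below; the schema, field names and sample record are invented for the example and do not reproduce the project's actual data model.

```python
import sqlite3

con = sqlite3.connect("heritage_docs.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS document (
    id          INTEGER PRIMARY KEY,
    building    TEXT NOT NULL,      -- monument or ensemble name
    doc_type    TEXT NOT NULL,      -- photograph, orthophoto, point cloud, drawing...
    file_path   TEXT NOT NULL,
    file_format TEXT,               -- jpeg, ply, dxf, pdf...
    captured_on TEXT,               -- iso date of the field survey
    keywords    TEXT                -- free descriptors used for retrieval
);
""")

con.execute(
    "INSERT INTO document (building, doc_type, file_path, file_format, captured_on, keywords) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("example church", "point cloud", "scans/facade_01.ply", "ply",
     "2010-08-15", "facade; laser scanning; colonial"),
)
con.commit()

# simple keyword retrieval
for row in con.execute("SELECT building, doc_type, file_path FROM document "
                       "WHERE keywords LIKE ?", ("%facade%",)):
    print(row)
```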
professionals of information science contribute the knowledge needed to identify (name), index and store the documents and their metadata so as to facilitate the retrieval of these documents and their use. this involves the definition of keywords, descriptors, rules for document classification and naming, and the establishment of workflows for querying, updating, maintenance, access restriction and other types of operations applied to the documents contained in the database. the computer science professionals act in the design and implementation of a multimedia database able to support the use, maintenance, integrity, security and efficiency of the operations conducted by users on the database, according to the specifications set, whether in the final stage or in intermediate stages of the process. here the business rules are established, together with the requirements for information integrity, the hierarchy of data access, transaction and access logging, and all other controls necessary and sufficient to ensure efficient use of the system. moreover, other professionals such as web designers and webmasters act on the web interface between the multimedia database and the end users, providing appropriate conditions for online access and queries to the various types of information stored. to achieve this, it is necessary to make use of various types of viewers and plugins that allow the visualization of the different media and data storage formats, besides the traditional resources for implementing web interfaces to database applications.

although the idea of building a large database to contain and maintain multimedia data on the architectural heritage is particularly attractive, some issues need to be thought out better to ensure access to the data in the future. beyond the physical security of the data, a problem solved by backup and data security techniques, it is necessary to ensure that the data can still be read in the future. the speed of technological evolution, the diversity of media, file formats and their versions, and the obsolescence of programs and of reading and writing devices seen in the recent past show that guarantees of preserved compatibility are far from being a solved issue.

6. control and meta-documentation

the final step of the proposed methodology is the control and documentation of the processes themselves, the so-called meta-documentation, which involves the production of partial and final reports. it is essential to make a critical evaluation of products and processes in order to produce qualitative and quantitative performance indicators of the activities and generated products, which can guide the planning and implementation of new documentation projects. moreover, they should validate the use of those technologies in practical applications in specific areas. at this stage, a particularly useful control mechanism is the generation of thematic maps for monitoring work progress, implemented with a gis tool, which allows the spatialization of the work when there are many units to be surveyed or very large or complex buildings. such maps can be turned into illustrated spatial schedules.

7. conclusions

this work aims to disseminate the perceptions accumulated in conducting a series of scholarly works, in order to discuss and refine them with the participation of other experts, contributing to the improvement of work methodologies as well as to the diffusion of scientific knowledge.
as explained here, the documentation of architectural and urban sites, by the number of variables involved and the resources allocated, is a complex and multidisciplinary activity, involving traditional disciplines such as architecture, design, survey methods, history and art history, strongly combined with digital microelectronics, computer science and information science, and requiring significant technological, financial and human resources. finally, we must ensure that, in the eagerness to produce extensive documentation, poorly understood and poorly drafted, one does not lose focus on the problem; that is, rather than concentrating all efforts on the physical preservation of the monuments, these efforts should be divided between physical preservation and preservation and re-working in digital media. it is therefore the responsibility of research institutions and groups to find solutions and produce specifications for the use and creation of applications and file formats that can ensure the preservation of the digitized heritage. these are open questions that need to be addressed soon.

8. acknowledgements

we want to record our acknowledgements to everyone who has contributed in any way to the works related here. we would especially like to thank cnpq, the brazilian agency for the support of science and technology, for the scholarships and for the financial support given to the research group.

9. references

[1] amorim, a. l.: documenting architectural heritage in bahia, brazil, using digital technologies. proceedings of cipa 2007, athens, september 2007.
[2] bahia, inventário de proteção do acervo cultural da bahia. monumentos do município de salvador. v. 1. salvador, 1975.
[3] bahia, inventário de proteção do acervo cultural da bahia. monumentos e sítios da serra geral e chapada diamantina. v. 4. salvador, 1980.
[4] amorim, a. l., chudak, d.: patrimônio histórico digital: documentação do pelourinho, salvador – ba, com tecnologia 3d laser scanning. proceedings of sigradi 2005, lima, november 2005.
[5] scharstein, d., szeliski, r.: a taxonomy and evaluation of dense two-frame stereo correspondence algorithms. international journal of computer vision 2002, 47(1/2/3), 7–42.
[6] hullo, j. f., et al.: photogrammetry and dense stereo matching approach applied to the documentation of the cultural heritage site of kilwa (saudi arabia). proceedings of cipa 2009, kyoto, october 2009.
[7] walford, alan: a new way to 3d scan: photo-based scanning saves time and money, http://www.photomodeler.com/downloads/scanningwhitepaper.pdf, 2010-10-10.
[8] hutton, t., et al.: dense surface point distribution models of the human face. ieee journal 2001, 153-160.
[9] lima, j. f. s., et al.: levantamento da portada das igrejas de são francisco e do rosário com nuvens de pontos. proceedings of arq.doc 2010, salvador, december 2010.

svg v kartografii

otakar čerba
department of mathematics, geomatics section, faculty of applied sciences, university of west bohemia
e-mail: ota.cerba@seznam.cz

klíčová slova: svg, xml, digitální mapa, internet, mobilní zařízení

abstrakt

v červenci 2005 se ve španělském městě a coruña konala dvacátá druhá mezinárodní kartografická konference. ve svém příspěvku definoval předseda komise pro mapy a internet mezinárodní kartografické asociace (commission on maps and the internet, international cartographic association / association cartographique internationale) prof. michael p.
peterson čtyři základńı směry, kterými by se měl ub́ırat výzkum v oblasti digitálńı kartografie v prostřed́ı internetu: � internet map use, � internet map delivery, � internet multimedia mapping, � internet mobile mapping. cı́lem tohoto př́ıspěvku je ukázat svg (scalable vector graphics) jako pravoplatného člena rodiny technologíı pro tvorbu digitálńıch map, konkrétně pro tzv. internet mapping. jednotlivé části se věnuj́ı představeńı svg, možnostem využ́ıváńı svg v současné kartografii s přihlédnut́ım k bod̊um z výše uvedeného seznamu, přednostem a nedostatk̊um současné verze svg a také r̊uzného aplikakčńıho software. článek také obsahuje výčet možnost́ı tvorby map ve formátu svg, včetně jejich stručného zhodnoceńı. úvod svg představuje otevřený formát určený předevš́ım pro popis a distribuci dvourozměrných vektorových dat v prostřed́ı internetu. v oblasti digitálńı kartografie, předevš́ım v internetové kartografii, se s svg setkáváme stále častěji. standard svg vytvář́ı od roku 1998 world wide web consortium (w3c)1. svg 1.0 źıskalo status w3c recommendation (doporučeńı organizace w3c), které se de facto rovná standardizaci, v zář́ı 2001. od 14.1.2003 je k dispozici verze 1.1 (specifikace svg 1.12), která je dnes všeobecně uznávaným standardem. verze 1.1 se zaměřuje předevš́ım na aplikováńı svg na méně výkonná mobilńı zař́ızeńı. proto došlo k rozděleńı (modularizaci) celé specifikace svg a vznikly dva nové profily svg tiny (svgt) a svg basic (svgb), které jsou souhrně označovány jako svg mobile profiles (mobile svg). svgb se orientuje na zař́ızeńı typu pda 1 http://www.w3.org/ 2 http://www.w3.org/tr/svg11/ geinformatics fce ctu 2006 112 http://www.w3.org/ http://www.w3.org/tr/svg11/ svg v kartografii (personal data assistant) nebo smartphone. z p̊uvodńı specifikace byly odstraněny některé filtry a použit́ı ořezových cest. svgt jako podmnožina svgb je určeno předevš́ım pro mobilńı telefony. z toho d̊uvodu byla vypuštěna podpora css styl̊u, filtr̊u, skript̊u, gradient̊u, vzork̊u a pr̊uhlednosti. ve stádiu př́ıpravy (w3c working draft) se nalézá verze 1.2 scalable vector graphics (svg) full 1.2 specification3. svg je schématem odvozeným ze standardu sgml/xml (standard generalized markup language / extensible markup language). dı́ky tomu může svg komunikovat se všemi aplikacemi a technologiemi na stejné bázi. kromě vlastńıho xml lze využ́ıt např́ıklad gml (geography markup language), xhtml (extensible hypertext markup language), mathml (mathematical markup language), xforms, smil (synchronized multimedia integration language), xslt (extensible stylesheet language transformations), xsl fo (extensible stylesheet language formatting objects), dom (document object model) a mnohé daľśı. nav́ıc pokud má uživatel zkušenosti s nějakou sgml/xml aplikaćı, pak základy práce s svg na něj nekladou žádné zvláštńı požadavky ani z hlediska času, ani z hlediska studijńı náročnosti. ze sgml/xml přeb́ırá svg řadu výhod, pro které si źıskává ve světě informačńıch technologíı řadu př́ıznivc̊u. jedná se předevš́ım o formu zápisu, snadné propojeńı s jinými aplikacemi, jednoduché přizp̊usobeńı potřebám uživatele a jednoduchá pravidla pro už́ıváńı (podrobněji viz [čer2006]). soubor ve formátu svg neńı binárńı (pro př́ıznivce binárńıch formát̊u existuje komprimovaná verze svgz použ́ıvá se komprese gzip, tedy open-source varianta známého zip algoritmu), ale jde o běžný textový ascii (american standard code for information interchange) soubor. 
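as a minimal illustration of the points made above, the plain-text xml form of an svg document and its gzip-compressed svgz variant can be produced with a few lines of code; the drawing content is arbitrary and serves only as an example.

```python
import gzip

# a minimal svg document: plain xml text, readable and editable by hand
svg = """<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <rect x="10" y="10" width="180" height="100" fill="#cde" stroke="black"/>
  <text x="20" y="65" font-size="14">a tiny svg map</text>
</svg>
"""

with open("example.svg", "w", encoding="utf-8") as f:
    f.write(svg)

# the svgz variant is simply the same text compressed with gzip
with gzip.open("example.svgz", "wt", encoding="utf-8") as f:
    f.write(svg)
```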
zápis v textové formě je snadno čitelný a také srozumitelný pro běžného člověka, nikoli jen pro it specialistu. ascii text je nezávislý na konkrétńı platformě a technologii se stejným svg souborem mohou pracovat uživatelé r̊uzných operačńıch systémů i r̊uzných typ̊u zař́ızeńı. textový formát je univerzálńı, a proto nezastarává jako některé proprietárńı formáty. důležitá je také možnost prohledáváńı dokument̊u. zápis svg souboru umožňuje také vyhledáváńı textu uvnitř obrázk̊u např́ıklad systém google dokáže indexovat svg elementy . integrace svg do vlastńıch aplikaćı je bezproblémová svg je otevřená technologie. kterýkoli uživatel má možnost svg využ́ıvat a také vytvářet si vlastńı podmožiny svg (obecně nové formáty založené na xml). integraci do jiných aplikaćı, př́ıpadně s jinými dokumenty podporuje fakt, že sgml/xml neńı jediný formát, ale jedná se velice širokou skupinu technologíı, které lze mezi snadno propojit pomoćı vazeb, většinou definovaných opět pomoćı sgml/xml. ke kladným reakćım uživatel̊u na svg mapy přispěla také podpora znakového kódováńı unicode, které obsahuje v́ıce než 38 000 znak̊u světových abeced. pro tv̊urce map tedy neńı problémem vytvářet texty, včetně mapových popisk̊u v nejr̊uzněǰśıch exotických jazyćıch. výhodou formát̊u založených na standardu sgml/xml je možnost automatické kontroly syntaktické správnosti dokumentu pomoćı specializovaných programů validátor̊u. kontrola je možná d́ıky propojeńı s tzv. schémovými jazyky (relax ng, xml schema, dtd, schematron a daľśı). pomoćı schémových jazyk̊u (opět často založených na sgml/xml) uživatel může definovat řadu pravidel (povolené elementy a atributy, vazby mezi jednotlivými prvky a dokumentu, datové typy, omezeńı rozsahu př́ıpustných hodnot, integritńı omezeńı apod.), 3 http://www.w3.org/tr/svg12/ geinformatics fce ctu 2006 113 http://www.w3.org/tr/svg12/ http://www.w3.org/tr/svg12/ svg v kartografii př́ıpadně přidávat a upravovat pravidla již existuj́ıćıch schémat. tato pravidla určuj́ı libovolnou podmnožinu jakéhokoli sgml/xml formátu pro vlastńı použit́ı. svg dokumenty lze snadno převádět mezi sebou a také do jiných formát̊u. pro zápis transformačńıch pravidel slouž́ı stylové jazyky. vizualizačńı vlastnosti lze přiradit svg dokument̊um pomoćı kaskádových styl̊u [čer2005]. svg dokument lze převést na svg dokument s odlǐsnou strukturou (např. odstraněńı popisk̊u, výběr konkrétńıho typu prvk̊u apod.). také je možná transformace na jiný sgml/xml často se použ́ıvá metoda generováńı map v svg z gml (geography markup language) dat (viz xslt transformace). je možná i transformace obrácená, kdy lze z mapy źıskat např́ıklad souřadnice jednotlivých prvk̊u a uložit je v libovolném sgml/xml formátu nebo v ascii textu. konečně je možné také upravit svg do formátu, který bude vhodněǰśı pro tisk (např. pdf). k těmto operaćım se použ́ıvá transformačńı jazyk nazývaný xsl (extensible stylesheet language). podrobněǰśı informace o svg jsou k dispozici na stránkách w3c, kde se také nacháźı specifikace jednotlivých verźı a daľśı materiály. svg & internetová kartografie v červenci 2005 se ve španělském městě a coruña konala dvacátá druhá mezinárodńı kartografická konference4. ve svém př́ıspěvku definoval předseda komise pro mapy a internet5 mezinárodńı kartografické asociace6 (commission on maps and the internet, international cartographic association / association cartographique internationale) prof. michael p. 
peterson z university of nebraska čtyři základńı směry, kterými by se měl ub́ırat výzkum v oblasti digitálńı kartografie v prostřed́ı internetu: 1. internet map use (použ́ıváńı internetových map), 2. internet map delivery (distribuce map na internetu), 3. internet multimedia mapping (propojeńı map na internetu a multimédíı), 4. internet mobile mapping (intertové mapy na mobilńıch zař́ızeńıch). následuj́ıćı části se věnuj́ı aplikaci svg s přihlédnut́ım k jednotlivým bod̊um z výše uvedeného seznamu. internet map use podle [pet2005] je ćılem výzkumu je šetřeńı v oblasti nár̊ustu uživatel̊u internetu, nár̊ustu uživatel̊u map na internetu, metod použ́ıváńı map na internetu a př́ıstup̊u ke zlepšeńı použ́ıváńı map na internetu. použ́ıváńı svg mapy je poměrně jednoduché. uživatel obvykle nepotřebuje žádný speciálńı software pro prohĺıžeńı map a pro práci s těmito mapami. v praxi se použ́ıvaj́ı dva základńı zp̊usoby zobrazováńı svg soubor̊u speciálńı prohĺıžeče (batik) nebo klasické prohĺıžeče 4 http://www.icc2005.org/ 5 http://maps.unomaha.edu/ica/ 6 http://www.icaci.org/ geinformatics fce ctu 2006 114 http://www.icc2005.org/ http://www.icc2005.org/ http://maps.unomaha.edu/ica/ http://www.icaci.org/ svg v kartografii webových stránek (mozilla firefox, opera, internet explorer, konqueror, safari, amaya). prohĺıžeče mohou svg podporovat nativně (např. mozilla firefox, konqueror, amaya, safari nebo opera; př́ımou podporu svg by měla nab́ızet i sedmá verze programu internet explorer) nebo pomoćı pluginu, který je nutné do prohĺıžeče nainstalovat. nejpouž́ıvaněǰśı kombinaćı je propojeńı prohĺıžeče microsoft explorer s modulem adobe svg viewer. tento prohĺıžeč lze nainstalovat i do jiných prohĺıžeč̊u (mozilla firefox, opera) a existuj́ı i verze pro mac os x, linux nebo solaris. adobe svg viewer představuje v současnosti nejpouž́ıvaněǰśı a nejkvalitněǰśı aplikaci pro prohĺıžeńı svg dokument̊u. také množina podporovaných svg element̊u a atribut̊u je v př́ıpadě adobe svg viewer nekomplexněǰśı. na druhou stranu je třeba poznamenat, že firma adobe si přidala do svg standardu řadu atribut̊u, které ostatńı prohĺıžeče ani svg standardy nepodporuj́ı. většina programů pro práci svg (prohĺıžeče, editory, validátory, konverzńı nástroje apod.) je nav́ıc k dispozici zdarma v rámci nejr̊uzněǰśıch otevřených licenćı. to znamená, že tvorba map i jejich použ́ıváńı znamená pro autory i uživatele minimálńı náklady. svg je velice atraktivńı i z hlediska interaktivity. některé prohĺıžeče uživateli nab́ıźı některé předdefinované základńı funkce, které jsou pro uživatele digitálńıch map d̊uležité. např́ıklad adobe svg viewer nab́ıźı změnu měř́ıtka (zooming) a posun (panning), přičemž tyto funkce jsou př́ıstupné kombinaćı tlač́ıtka myši a kláves nebo prostřednictv́ım kontextového menu. mnohem univerzálněǰśı je použ́ıváńı skriptovaćıch jazyk̊u. s jejich pomoćı a s pomoćı objektově orientované reprezentace xml dom lze realizovat výše uvedené funkce i náročněǰśı operace s mapou, jako je např́ıklad přeṕınáńı vrstev. jednotlivé elementy svg dokumentu lze provázat pomoćı odkaz̊u s daľśımi prvky (webové stránky, jiné svg dokumenty apod.). t́ımto zp̊usobem lze svg mapu provázat s legendou, datovou tabulkou, grafy nebo exterńımi dokumenty. nejv́ıce použ́ıvaným skriptovým jazykem je ecmascript (european computer manufacturer’s association), který představuje standardizovanou verzi javascriptu. 
z hlediska skriptováńı svg dokument̊u je zaj́ımavá iniciativa e4x (ecmascript pro xml) podporu tohoto standardu nab́ıźı např́ıklad prohĺıžeč mozilla firefox 1.5. ačkoli je svg standardizováno od roku 1998, výrobci software většinou nepodporuj́ı kompletńı specifikaci, ale pouze nějakou podmnožinu. např́ıklad prohĺıžeč www stránek opera, která do své aplikace integrovala prohĺıžeč firmy ikivo, podporuje od verze 8 pouze profil svg 1.1 tiny. to znamená, že v opeře nelze pracovat s grafickými filtry, skripty, přechody barev, výplňovými vzorky, pr̊uhlednost́ı, symboly, ořezovými cestami, maskováńım ani s některými složitěǰśımi elementy pro zobrazeńı textu (, ). opera, resp. svg 1.1 tiny nepodporuje ani formátováńı pomoćı kaskádových styl̊u. také prohĺıžeč mozilla firefox, který přǐsel s př́ımou podporou svg od verze 1.5, nepodporuje kompletńı svg 1.1. k dipozici nejsou elementy práci fonty, většina filtr̊u a také chyb́ı podpora animaćı. tyto nedostatky by měly být podle tv̊urc̊u odstraněny, ćılem je kompletńı podpora svg 1.1. také autoři opery předpokládaj́ı, že devátá verze bude obsahovat podporu stadardu svg 1.1 basic. internet map delivery výzkum v této oblasti se soustřed́ı na nalezeńı lepš́ıch metod pro přenos map v prostřed́ı internetu, předevš́ım studium nových internetových protokol̊u a grafických souborových formát̊u pro kartografické aplikace [pet2005]. geinformatics fce ctu 2006 115 svg v kartografii do této kategorie spadá výzkum svg z hlediska vhodnosti pro tvorbu internetových map. v současnosti se ukazuje, že producenti digitálńıch map konečně zač́ınaj́ı svg využ́ıvat ve větš́ı mı́̌re a že svg neńı jen kuriozitou nebo experimentálńım formátem použ́ıvaným v univerzitńım prostřed́ı. proč se svg prosazuje stále ve větš́ı mı́̌re na trhu s internetovými mapami? na tuto otázku nejlépe odpov́ıdaj́ı vlastnosti tohoto formátu. základem každého svg dokumentu jsou tři libovolně kombinovatelné prvky vektorové elementy, text a rastrové obrázky. uživatel může vhodným zp̊usobem zobrazit vektorové prvky mapy, u nichž při změně měř́ıtka nedojde k rozostřeńı hran, jako v př́ıpadě rastr̊u a neńı tedy zapotřeb́ı vytvářet pohledové pyramidy jako v př́ıpadě některých rastrových map. vektorové elementy (např́ıklad komunikace, hranice, polygony apod.) mohou být doplněny rastrovým podkladem (např́ıklad ortofotomapa nebo zditalizovaná historická mapa) a textovými popisky. daľśı výhodou svg je malá velikost soubor̊u. ačkoli současné technologie umožňuj́ı rychlý přenos a zobrazováńı, přesto velikost soubor̊u je pro velkou část uživatel̊u limituj́ıćım faktorem. pokud bude svg použito pro mapy s velkým pod́ılem vektorových prvk̊u, pak je srovnáńı velikosti a kvality soubor̊u mezi svg a libovolným rastrovým formátem bezpředmětné. mnohem zaj́ımavěǰśı situace nastává v př́ıpadě porovnáńı svg a jeho největš́ıho konkurenta na poli vektorové grafiky formátu shockwave flash (swf). swf dokumenty jsou binárńı a proto jsou menš́ı než stejné soubory svg. svg ovšem může existovat i v komprimované variantě svgz pak je velikost výsledných soubor̊u podobná [hel2004]. mezi daľśı vlastnosti svg, které kartografa zaujmou patř́ı možnost transformaćı. svg nab́ıźı celou množinu geometrických transformaćı v rovině posunut́ı (translate), otočeńı (rotate), změnu měř́ıtka (scale) a zkoseńı (skewx, skewy). transformace je možné zapisovat pomoćı atributu transform s př́ıslušnými hodnotami (viz seznam transformaćı) nebo se pro zápis použ́ıvá transformačńı matice. 
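the transform attribute and the symbol-reuse mechanism discussed here can be illustrated with a short sketch that generates a small svg file programmatically; the symbol, coordinates and angles are arbitrary example values.

```python
import xml.etree.ElementTree as ET

SVG = "http://www.w3.org/2000/svg"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("", SVG)
ET.register_namespace("xlink", XLINK)

svg = ET.Element(f"{{{SVG}}}svg", {"width": "400", "height": "400",
                                   "viewBox": "0 0 400 400"})

# a reusable map symbol defined once inside <defs> ...
defs = ET.SubElement(svg, f"{{{SVG}}}defs")
sym = ET.SubElement(defs, f"{{{SVG}}}symbol", {"id": "church", "viewBox": "0 0 10 10"})
ET.SubElement(sym, f"{{{SVG}}}circle", {"cx": "5", "cy": "5", "r": "4", "fill": "darkred"})

# ... and placed repeatedly with <use>, moved, rotated and scaled via transform
for x, y, angle in [(50, 80, 0), (200, 150, 30), (310, 300, 0)]:
    ET.SubElement(svg, f"{{{SVG}}}use", {
        f"{{{XLINK}}}href": "#church",
        "width": "12", "height": "12",
        "transform": f"translate({x},{y}) rotate({angle}) scale(1.5)",
    })

ET.ElementTree(svg).write("symbols.svg", encoding="utf-8", xml_declaration=True)
```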
transformace zjednoduš́ı problematiku přizp̊usobeńı souřadnicového systému mapy souřadnicovému systému použ́ıvanému v svg dokumentu. svg také umožňuje také zavedeńı vlastńıch délkových jednotek, které lze definovat pomoćı atributu viewbox. mezi podporované délkové jednotky patř́ı milimetry, centimetry, palce, pixely, procenta a daľśı. takovým zp̊usobem lze např́ıklad dokumentu o rozměru 400 x 400 pixel̊u, přǐradit souřadnice s-jtsk. daľśım užitečným prvkem je element . tento prvek umožňuje definováńı grafických entit, které je pak možné opakovaně umist’ovat na libovolná mı́sta svg dokumentu. v př́ıpadě tvorby map lze tuto vlastnost využ́ıt pro definováńı mapových značek. tyto značky je pak možné pomoćı transformačńıch funkćı nejen posunout na správné mı́sto, ale také natočit, př́ıpadně zmenšit nebo zvětšit. v svg je jednoduché definováńı knihoven mapových značek, které mohou být připojeny (vloženy) do r̊uzných svg dokument̊u, přičemž jejich vizualizačńı vlastnosti (barva, typ linie, pr̊uhlednost apod.) lze individualizovat prostřednictv́ım kaskádových styl̊u. v úvodńı pasáži svg dokumentu, která je označená tagy , lze kromě symbol̊u definovat také daľśı prvky jako např́ıklad vzory výplně, gradienty nebo typy čar. pro kartografy je užitečná také daľśı vlastnost, která dovoluje rozmı́stit text podle libovolné křivky, což lze využ́ıt např́ıklad při popisćıch pohoř́ı, vodńıch ploch nebo územńıch celk̊u. v sgml/xml dokumentech je možné použ́ıvat elementy z jiných sgml/xml formát̊u. k tomu se použ́ıvaj́ı tzv. jmenné prostory (namespaces). takovým zp̊usobem je možné propojit geinformatics fce ctu 2006 116 svg v kartografii i metadata popisuj́ıćı mapu. w3c doporučuje metadatové formáty na bázi sgml/xml rdf (resource description framework) nebo dublin core. tv̊urci internetových map a atlas̊u uvád́ı mnoho daľśıch d̊uvod̊u proč použ́ıvat technologii svg. jedná se např́ıklad o možnost vyhlazováńı grafických i textových prvk̊u (antialiasing) nebo možnosti exportu dat z jiných formát̊u jako jsou např́ıklad shapefile nebo gml. svg mapy je také možné generovat z databáźı. internet multimedia mapping v prostřed́ı internetu lze snadno zrealizovat propojeńı kartografického produktu s multimediálńımi prvky (zvukové soubory, grafické soubory, video apod.). internetová kartografie zkoumá možnosti obohaceńı map o nejr̊uzněǰśı multimediálńı prvky. svg v současnosti nab́ıźı dva prvky, které podporuj́ı propojeńı multimediálńıch prvk̊u s vektorovou grafikou. jedná se o možnost vložeńı rastrové grafiky. k tomu účelu slouž́ı element , který umožňuje připojeńı rastru. k dispozici jsou dva typy kompreśı rastrové grafiky: � bezztrátová komprese formát png (portable network graphics), � ztrátová komprese formát jpeg (joint photography expert group). druhou zaj́ımavou vlastnost́ı, která může přispět k podpoře multimedia mapping a také vést k nár̊ustu uživatel̊u svg map je využ́ıváńı smil animaćı. animace jsou d̊uležité nejen jako prostředek prostředek pro zatraktivněńı mapy a přitáhnut́ı pozornosti uživatele. pomoćı animovaných kartogramů nebo kartodiagramů lze velice snadno vizualizovat časové změny v geografickém prostoru. velké zlepšeńı v souvislosti s podporou multimédíı se očekává s nástupem nové verze svg 1.2, která by měla zlepšit integraci audio a video soubor̊u a také zavést možnosti streamováńı. internet mobile mapping mobilńı zař́ızeńı (mobilńı telefony, kapesńı poč́ıtače) jsou d́ıky své velikosti a snadné manipulaci č́ım dál častěji použ́ıvány pro navigačńı účely spojené se naváděńım pomoćı mapových produkt̊u. 
hlavńım problémem je redukce velikosti mapy pro zobrazeńı na malých displej́ıch a přenos pomoćı technologíı s malou kapacitou. tv̊urci svg kladou v současnosti velký d̊uraz na mobilńı aplikace, o čemž svědč́ı iniciativa mobile svg7. v konsorciu, které svg vyv́ıj́ı, aktivně pracuj́ı výrobci mobilńıch zař́ızeńı, jako jsou např́ıklad nokia, ericsson nebo sharp corporation. normu svg tiny také zařadilo konsorcium 3gpp8 (3rd generation partnership project) do svého 3gpp standardu pro třet́ı generaci mobilńıch telefon̊u jako povinný základ multimedia message service (mms). mobile svg je také určeno pro oblast zábavy a e-komerce. dále by na podkladě svg měly pracovat lokalizačńı a mapovaćı služby. uvažuje se i o tvorbě grafického uživatelského prostřed́ı (gui) pomoćı svg. 7 http://www.w3.org/tr/svgmobilereqs 8 http://www.3gpp.org/ geinformatics fce ctu 2006 117 http://www.w3.org/tr/svgmobilereqs http://www.3gpp.org/ svg v kartografii z hlediska aplikaćı je k dispozici programové vybaveńı firem nokia, csiro, esvg (embedded svg), bitflash, sharp, access nebo zoomon. jedná se např́ıklad o prohĺıžeče tinyline, zoomon svgt viewer nebo bitflash svgt viewer. samostatnou kapitolou je prohĺıžeč opera s integrovaným prohĺıžečem zoomon svgt viewer existuj́ıćı ve verzi pro mobilńı telefony, pda, smartphones, který umožňuje zobrazováńı svg i xhtml. kartografické služby určené pro mobilńı zař́ızeńı poskytuje např́ıklad firma vodafone nebo japonská telekomunikačńı společnost kddi, která také spravuje www stránky jamaps9. [qui2004] v oblasti internet mobile mapping se v posledńı době objevilo velké množstv́ı konkurenčńıch vektorových formát̊u. jedná se např́ıklad o ravegeo (od společnosti idevio), maptp (netsolut), slimmap a gfxfeaturemap (wayfinder systems ab). podrobněǰśı srovnáńı těchto formát̊u lze nalézt v publikaci [wal2003]. tvorba map v svg ačkoli se použ́ıváńı svg v digitálńı kartografii jev́ı jako velice perspektivńı, je nutné si uvědomit, že svg je pouze prostředkem pro tvorbu a distribuci map a nikoli komplexńı technologíı nebo dokonce programovým vybaveńım. kvalita výsledného kartografického produktu tedy nezáviśı v plné mı́̌re na vývoji svg, ale zčásti také na použ́ıvaném software pro tvorbu map a jejich zobrazeńı. existuj́ı čtyři základńı zp̊usoby tvorby map v svg: 1. pomoćı wysiwyg (what you see is what you get) editor̊u, 2. export z jiného formátu, 3. generováńı z dat ve formátu xml pomoćı xslt transformaćı, 4. generováńı z databáze prostřednictv́ım skriptovaćıch jazyk̊u. wysiwyg editory tvorba map pomoćı wysiwyg editor̊u je velice snadná a pohodlná. z tohoto d̊uvodu je tato metoda vhodná i pro kartografické začátečńıky a laiky, kterým umožńı poměrně jednoduše publikovat na webu vektorové mapy. nav́ıc existuje velké množstv́ı editor̊u, které jsou distribuovány na platformě open-source (inkscape, glips graffiti, sodipodi). na druhou stranu tento postup bývá často zdrojem elementárńıch chyb a nepřesnost́ı, které jsou často zp̊usobeny ignorováńım základńıch kartografických pravidel. mapy vytvořené pomoćı wysiwyg editor̊u maj́ı většinou pouze grafickou přesnost správné souřadnice je většinou nutné opravit “ručně” ve zdrojovém kódu mapy. dvojsečnou zbraň představuje také jazyk java, v němž je většina software pro práci s svg vytvořena. na jednu stranu jsou tyto programy k dispozici pro všechny hardwarové platformy. problémem z̊ustávaj́ı rozsáhlé soubory, s nimiž se v oblasti digitálńı kartografie můžeme často setkat. přičemž se jedná předevš́ım o velikost z hlediska počtu vykreslovaných element̊u. 
programy v interpretovém jazyce java často nedokáž́ı tak rozsáhlé soubory zpracovat docháźı 9 http://www.jamaps.org/ geinformatics fce ctu 2006 118 http://www.jamaps.org/ svg v kartografii ke značnému zpomaleńı aplikace nebo dokonce k jej́ımu zastaveńı v d̊usledku nedostatku paměti. export dat do svg některé gis aplikace, jako např́ıklad arcgis, mapinfo nebo open-source openjump, umožňuj́ı export dat do formátu svg. výhodou tohoto př́ıstupu je možnost tvorby mapy v prostřed́ı, které je současným standardem pro tvorbu digitálńıch map. daľśı přednost́ı tohoto př́ıstupu je existence velkého množstv́ı dat ve formátech pro gis software a také mnoha kartografických nástroj̊u integrovaných do gis. data převedená z gis do svg neobsahuj́ı některé typy chyb, např́ıklad překryty značek nebo popisk̊u. proto tato cesta představuje pro uživatele geografických informačńıch systémů nejjednodušš́ı prostředek k tvorbě svg map. mezi problematické stránky tohoto př́ıstupu patř́ı nemožnost ovlivněńı výsledného svg, které často nesplňuje standardy. vyexportované soubory bývaj́ı většinou př́ılǐs rozsáhlé z d̊uvod̊u připojeńı nadbytečného množstv́ı xml prezentačńıch atribut̊u. některé programy (např́ıklad openjump) neexportuj́ı do svg data, z nichž je mapa generována, ale pouze výslednou mapu nejsou tedy zachovány p̊uvodńı souřadnice. podobně jako v předchoźı sekci je uživatel nucen použ́ıt “ručńı” editaci souboru, pokud bude cht́ıt mapu obohatit o r̊uzné interaktivńı prvky. xslt transformace pro tvorbu map z dat v xml formátu (nejčastěji se použ́ıvá formát gml) prostřednictv́ım xslt transformaćı jsou nutné netriviálńı znalosti z oblasti xml a xslt. také zápis xslt pravidel lze většinou realizovat jen pomoćı textových (programátorských) editor̊u, které přes snahu svých tv̊urc̊u nemohou z hlediska komfortu předstihnout své wysiwyg konkurenty. proto je tvorba xslt map pomoćı xslt trasformaćı často opomı́jenou metodou. na druhou stranu je třeba poznamenat, že d́ıky př́ıbuznosti všech xml formát̊u jsou postupy pro práci s nimi téměř identické proto lze označit jako výhodu fakt, že formáty pro zdrojová data, transformačńı pravidla a také výsledná mapa jsou založeny na bázi sgml/xml. tv̊urce mapy i jej́ı uživatel mohou využ́ıvat všech přednost́ı formátu xml, jako je např́ıklad čitelnost, flexibilita apod. také je možné opakované využ́ıváńı již vytvořeného stylu a předevš́ım přizp̊usobeńı výstupńı mapy potřebám uživatel̊u, včetně změny symbolizace nebo velikosti soubor̊u, kdy je možné odstraněńı nadbytečných element̊u a atribut̊u výsledné svg. tato vlastnost je d̊uležitá při přenosu a zobrazováńı map prostřednictv́ım málo výkonných mobilńıch zař́ızeńı. přednost́ı je také široká nab́ıdka kvalitńıho programového vybaveńı (editory, konvertory, transformačńı procesory), přičemž většina je š́ı̌rena pod některou z oteřených licenćı. xslt procesory, včetně nejpouž́ıvaněǰśıho procesoru saxon, jsou většinou vytvořeny v jazyce java, proto se můžeme setkat s obt́ıžemi popisovanými v odstavci 3.1. geinformatics fce ctu 2006 119 svg v kartografii generováńı svg map z databáze z pohledu budoucnosti se tato metoda jev́ı jako nejperspektivněǰśı. podobně jako v předchoźım odstavci lze výslednou mapu téměř dokonale přizp̊usobit potřebám a požadavk̊um uživatele. databáze, serverové technologie a skriptovaćı jazyky umožňuj́ı bezpečné uchováváńı a spravováńı velkého množstv́ı geografických dat, jejich transformaci do svg a také publikováńı na internetu. 
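a minimal sketch of this server-side pattern is given below, assuming a postgis database whose geometries are serialized with the st_assvg function; the connection parameters, table and column names and the viewbox are illustrative only.

```python
import psycopg2

conn = psycopg2.connect(dbname="atlas", user="mapuser", password="secret")
cur = conn.cursor()

# postgis serializes each geometry as svg path data (the y axis is already flipped)
cur.execute("SELECT name, ST_AsSVG(geom, 0, 0) FROM districts")

paths = []
for name, d in cur.fetchall():
    paths.append(f'  <path d="{d}" fill="#e8e4d8" stroke="#555">'
                 f'<title>{name}</title></path>')

# the viewBox must match the extent of the data in its projected coordinate system
svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="-905000 1130000 200000 200000">\n'
       + "\n".join(paths) + "\n</svg>\n")

with open("districts.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```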
nav́ıc veškeré komponenty existuj́ı open-source nebo podobných licenčńıch variantách: � serverové technologie (apache), � skriptovaćı jazyky (python, perl, php), � databázové systémy (postgresql). generováńı svg map z relačńı databáze využ́ıvá např́ıklad projekt tirol atlas10, který vytvář́ı katedra geografie univerzity v innsburcku. jednotlivé tematické mapy a grafy atlasu jsou vytvářeny skripty v jazyce perl z databáze postgresql/postgis. z daľśıch technologíı autoři použili html, css a javascript. závěr technologie svg se v současné kartografii prosazuje ve stále větš́ı mı́̌re. ačkoli se svg hojně využ́ıvá v mnoha oblastech, např́ıklad v prostřed́ı komerčńı grafiky, e-bussinesu nebo zábavńıho pr̊umyslu, na celosvětové konferenci svg open 200511 v nizozemském eshende byla téměř jedna čtvrtina všech př́ıspěvk̊u věnována problematice geografických informačńıch technologíı, předevš́ım tvorbě map. nelze ovšem považovat svg za jedinou samospasitelnou technologii, která vyřeš́ı veškeré problémy, se kterými se současná kartografie setkává. svg, stejně jako ostatńı technologie, má svoje přednosti, ale i nevýhody. mezi klady svg patř́ı univerzálnost, nezávislost, otevřenost a komunikativnost. naopak k nedostatk̊um svg se nejčastěji řad́ı nedostatečná podpora ze strany výrobc̊u software a také fakt, že tato podpora neńı standardizována, nebot’ producenti programového vybaveńı často nerespektuj́ı všechna doporučeńı ze strany standard̊u w3c. kartografové by přiv́ıtali obohaceńı svg o podporu topologie, souřadnicových systémů a trojrozměrné grafiky, předevš́ım výškových systémů (v současnosti se nadmořské výšky připojuj́ı k svg soubor̊um jako metadata) [ack2001]. nekomerčńı licence a také široká komunita pracuj́ıćı na vývoji nových verźı svg a také na software pracuj́ıćı s svg zaručuje velice rychlý vývoj tohoto formátu. např́ıklad ještě v polovině roku 2005 neexistovala v žádné aplikaci podpora tzv. alternativńıch styl̊u tedy možnost volby styl̊u v prostřed́ı prohĺıžeče nebo pomoćı skript̊u [čer2005]. alternativńı styly nepodporoval žádný ze speciálńıch prohĺıžeč̊u ani prohĺıžeče www stránek s př́ımou podporou svg (např́ıklad opera použ́ıvá pouze variantu svg tiny, která neobsahuje kaskádové styly a vizualizačńıch vlastnosti jsou definovány pomoćı xml prezentačńıch atribut̊u). kvalitńı podpora kaskádových styl̊u, včetně alternativńıch, přǐsla až s programem mozilla firefox 1.5. alternativńı styly mohou být využ́ıvány pro změny barevných stupnic tematických map nebo 10 http://tirolatlas.uibk.ac.at/ 11 http://www.svgopen.org/2005/index.html geinformatics fce ctu 2006 120 http://tirolatlas.uibk.ac.at/ http://www.svgopen.org/2005/index.html svg v kartografii některých kartografických technik (např. typy graf̊u v kartodiagramech). nav́ıc v kvalitńım prohĺıžeči, který respektuje zásady w3c, neńı nutné ke změně styl̊u použ́ıvat skripty, což zjednoduš́ı a předevš́ım zrychĺı práci z rozsáhlými svg soubory. určité zlepšeńı přinese nová verze svg 1.2, od které se očekávaj́ı následuj́ıćı změny: lepš́ı podpora multimédíı (audio i video, včetně streamováńı), zkvalitněńı interaktivity, zalamováńı text̊u, podpora xbl (xml binding language) a daľśı. tyto nové vlastnosti jistě přinesou nové prvky do digitálńıch map, které budou poskytovat mnohem v́ıce informaćı nav́ıc přitažlivěǰśı a zaj́ımavěǰśı formou. reference [ack2001] ackland, ron, cox, simon. markup mapping [online]. south pacific science press international p/l, 2.8.2001. dostupné z: online12 [čer2005] čerba, otakar. css v digitálńı kartografii. in 16. 
kartografické konference. brno: univerzita obrany, 2005.
[čer2006] čerba, otakar. cartographic e-documents & sgml/xml [online]. in international symposium gis... ostrava 2006. ostrava: vysoká škola báňská - technická univerzita, 2006. dostupné z: online13
[eis2002] eisenberg, j. david. svg essentials. 1. vyd. sebastopol: o'reilly, 2002. 364 s. isbn 0-596-00223-8.
[hel2004] held, georg, ullrich, torsten, neumann, andreas, winter, andré m. comparing .swf (shockwave flash) and .svg (scalable vector graphics) file format specifications [online]. 29.3.2004. dostupné z: online14
[neu2003] neumann, andreas, winter, andréas m. vector-based web cartography: enabler svg [online]. 2003. dostupné z: online15
[neu2005] neumann, andreas. use of svg and ecmascript technology for e-learning purposes. in isprs workshop commissions. potsdam, 2005.
[pet2003] peterson, michael p. maps and the internet. 1. vyd. oxford: elsevier; international cartographic association (ica), 2003. 451 s. isbn 0-08-044201-3.
[pet2005] peterson, michael p. a decade of maps and the internet. in the 22nd international cartographic conference. mapping approaches into a changing world. a coruña: international cartographic association / association cartographique internationale, 2005. isbn 0-958-46093-0.
[qui2004] quint, antoine. mobile svg [online]. xml.com, o'reilly media inc., 2004. dostupné z: online16

12 http://www.positionmag.com.au/gu/content/2001/gu47/gu47_feature.html
13 http://gis.vsb.cz/gisengl/conferences/gis_ova/gis_ova_2006/proceedings/referaty/default.htm
14 http://www.carto.net/papers/svg/comparison_flash_svg
15 http://www.carto.net/papers/svg/index_e.shtml
16 http://www.xml.com/pub/a/2004/08/18/sacre.html

[rei2003] reichenbacher, t. adaptive methods for mobile cartography. in 21. international cartographic conference (icc). cartographic renaissance. durban: international cartographic association / association cartographique internationale, 2003. isbn 0-958-46093-0.
[sep2004] sepesi, greg. analysis vs. documentation. a comparison of complementary geographical tasks [online]. 2004. dostupné z: online17
[sim2004] simpson, john e. mapping and markup [online]. xml.com, o'reilly media inc., 2004. dostupné z: online18
[wal2003] waldén, martin. towards the integration of vector map graphics in mobile environments [online]. lund: department of communication systems, lund institute of technology, lund university, 2003.
dostupné z: online19

17 http://www.carto.net/papers/greg_sepesi/greg_sepesi_complementary_geographical_tasks_200401.pdf
18 http://www.xml.com/pub/a/2004/11/24/tourist.html
19 http://serg.telecom.lth.se/education/master_theses/docs/44_rep.walden.pdf

perception of colour scales used in thematic cartography by young people aged 15-17

barbora musilová
student of the geomatics programme, faculty of applied sciences, university of west bohemia in pilsen
baramusilova@gmail.com

abstract

there are many different types of datasets represented by maps in thematic cartography. it is possible to represent features and phenomena referring to areas as well as to points, and to represent qualitative data as well as quantitative data, in many different ways. one of the most important and most used cartographic means of expression is colour, or rather colour scales, which are chosen according to the data shown. this article is focused on the perception of colour scales in relation to the character of the data. for the experiment, a new questionnaire was created on the basis of a classification of colour scales. this questionnaire was posed to students of years 1 and 2 of high school in order to find out how they perceive colour scales. the study analyzes three main questions: whether students differentiate qualitative and quantitative datasets and the corresponding colour scales, whether they prefer representing features by contextual colours or by the colour they like more, and whether they are familiar with the principles of diverging colour scales. when processing the answers, correlations between certain factors and the answers were examined.

keywords: thematic cartography, perception of colour scales

1. introduction

one comes across a map, a scheme or a topological sketch every day. we need to understand a gps (global positioning system) navigation system while driving; we should be able to get to places according to a transit scheme. choropleth and other thematic maps are often used in newspapers to accompany an article, and we should be able to read them correctly and easily. we have been learning these skills since childhood, when using maps during school lessons to illustrate historical or geographical phenomena and to learn more about the world and its relationships. the more one needs to use maps, the better the ability to read maps quickly and correctly one acquires. there are many different ways to show a required phenomenon on a map. we can choose from several cartographic methods depending on the different characteristics of the data (the various methods are widely described in the literature on map making and cartography in general, e.g. (brewer, 2005) or (voženílek, 2004)). to depict a particular phenomenon, a map symbol is used which has specific visual variables (such as colour, shape, position...) (bertin, 1981) in conformity with the data shown. the values of the attributes used depend mostly on the cartographer; however, certain facts, conventions and other aspects should be considered (such as the purpose of the map, available software, financial resources, or the focus group and their cartographical literacy).
according to several experiments, it is easier to read a map when it is coloured (e.g. brewer, 1997). people can differentiate area symbols better when colour codes are used than when the codes are based on texture (see philips, 1980). moreover, colour maps generally attract more users than monochromatic maps do. whether one likes a map or not also depends on the harmony of the colours used. a definition and characterisation of colour harmony is proposed in the article "colours harmony in cartography" (christophe, 2011). there is also the possibility of using the same colours on maps as those used in some famous paintings (see cartwright, 2009).
colour is one of the cartographic means of expression; it carries information about the represented phenomenon, and a proper colour is chosen based on the characteristics of that phenomenon. there are some conventions for using particular colour scales to show certain phenomena (e.g. the hypsometric scale to show heights, or a blue-red scale for temperature); this research aims to find out whether some of these habitual manners are understandable for common map users. a group of teenagers was chosen as the subject of the experiment (the reasons for this choice and the profile of the group are specified in the next chapter). for the experiment, three main classifications of colour scales were specified.
people associate some phenomena with particular colours unconsciously, like blue for issues related to water, green for nature, red for danger etc. (oravcová, 2009). in the experiment, some of these clearly associable phenomena were chosen, as well as some phenomena with no specific colour to associate with, to find out how strongly respondents sense the relation between a feature and a colour. just as data can, generally speaking, be split into two groups, qualitative and quantitative, the colours used to represent them can be split into the same two groups. the experiment focused on determining whether respondents are able to perceive the difference between these two groups of data and the corresponding colour scales. the last phenomenon examined in this experiment was the respondents' ability to work with divergent colour scales.
the first part of the article describes the methodology of the experiment, the group of subjects, the expected results and the procedure of the research. the second (main) part deals with the processing of the received answers according to the type of classification of colours, comparing them with the expected results and checking possible dependences.

2. the methodology of the experiment
during their schooldays, children gain their first experience with cartographic works, and their future attitude towards maps may be strongly influenced by the maps they work with at school. books of maps and school atlases account for a significant share of the output of cartographic publishers. several comparisons of school atlases have been made, for instance with respect to their development over time (e.g. široká, 2009); the content of maps used in schoolbooks and school atlases has also been analysed (e.g. šákrová, 2010). on the other hand, there is considerably less research on the perception of school maps by pupils than there is research merely comparing maps. that fact led to the realization of this research.

a) the subject of the experiment
while looking for a proper group of subjects, a few requirements were set.
for the experiment, 50-100 respondents of approximately the same age were needed, with at least a basic ability to understand maps, but not more educated in this area than their peers. due to the requirement of self-sufficiency, the minimum age was set to 15, based on previous experience with youth. because the questionnaire could be presented collectively and the above requirements could be met, four school classes were chosen. there were 89 students from the 1st and 2nd year of grammar school (gymnasium), distinguished by the type of grammar school (four-year and eight-year type), all of them with normal colour vision.

table 1: profile of the group of respondents
year of school | eight-year gymnasium | four-year gymnasium | total
1st year       | 21                   | 23                  | 44
2nd year       | 22                   | 23                  | 45
total          | 43                   | 46                  | 89

b) hypotheses
as mentioned above, three main characteristics of colour maps were reflected while forming the hypotheses to be verified: showing quantitative and qualitative data, using contextual colours, and understanding divergent colour scales. the assumed results of the experiment were summed up in general hypotheses:
• only an insignificant number of respondents choose a colour other than the context one for showing a phenomenon with a common colour association
• respondents choose the colour which they like most for showing a phenomenon without a clear colour association
• almost all respondents choose the corresponding colour scales for showing quantitative and qualitative data, respectively
• more than half of the respondents can work with a divergent colour scale
for processing, the general hypotheses were specified with concrete numbers for each question of the questionnaire.

2.1. making of the questionnaire
the experiment was carried out as a questionnaire. each question included a title of a map (information about what should be represented on the map) and several (from 2 up to 6) colour variants of the map (see fig. 1 or further fig. 2). an overview of all selective questions and the colour scales used for each of them is given further on, in table 2. respondents should choose the variant which suited the title best. 17 selective questions were prepared, together with 7 additional questions to find out whether the respondents understood the maps. the choropleth map method was used for most of the maps.
the highlands region was chosen as the reference area for most of the questions because of its regular shape. for some questions, a map of the czech republic was used as well. the data sets had been modified so that the shown phenomenon would be mapped by the particular colour scales more easily (for example, for the map showing the most common surname in the area, at least one surname was required to occur in two or more different areas).

figure 1: monthly average wage in sub regions of the highlands region. used colour scales: a – qualitative (improper) scale, b – quantitative (proper) scale, c – quantitative (proper) scale with colour context

3. procedure
considering the number of respondents, the length of the questionnaire and the possibility to present the stimuli to each class separately, a slide show was chosen as the most appropriate form of presentation. the questions were put in such an order that related questions were isolated, and every question was numbered. colour variants were marked by letters.
the slide show included 3 extra slides before the actual questions, which served as an introduction to the theory of colour scales, a slide with instructions for the questionnaire, and a slide with one question as an example. the participants also received a paper form with the numbers of the questions and the appropriate letters for marking the chosen colour variant of the map. under every answer, free space was left for a brief substantiation.
the experiment took place on two consecutive days, each day with two classes. the presentation took approximately 45 minutes, the questionnaire itself 35 minutes. the participants were told to choose the colour variant which suits the particular title best, without over-sophisticated thoughts. they were also asked to fill in their forms on their own, without any discussion with their schoolmates.

4. processing
during the processing, the completed forms were sorted by school classes. the answers were first processed for the whole group of participants (all classes together) to get general results, and then for pairs of classes separated by age, by the day of the presentation, or by the type of grammar school, in order to find possible correlations.
table 2 (resp. tables 2a and 2b) shows the relative number of votes for each colour variant of the map. every question is indicated by its number and a shortened map title. two deductions follow clearly from both tables: most of the tested students can differentiate the quantitative and qualitative character of data and assign an appropriate colour scheme to it, and when the prime context colour variant was available, students chose it more often than the alternative one. to learn whether students understand divergent scales, mostly additional questions were applied; these are processed further in this chapter.

table 2a: results of selective questions – qualitative data [%]
columns as in the original table: qualitative scale; quantitative scale (with colour context, without colour context, alternative colour)
1. male and female rate: 11 (<10), 80 (>50), 9, –
3. the most common surname: 18, 27, –, 29, 26
5. soil usage: 3, 67, 9, 21
10. results of traffic collisions: 23 (<1), 35, –, 8, 34
13. local elections: 30, 62 (>80), 8, –

table 2b: results of selective questions – quantitative data [%]
columns as in the original table: qualitative scale; quantitative scale (with colour context, without colour context, alternative colour); diverging scale
2. average wage: 49, 29, –, –, 22 (<2)
6. density of suicides: 58 (<10), 16, 5 | 21 (<10), –
8. population growth: –, 66, –, 11, 23 (<1)
16. forestation: 52 (>60), 2, –, 5, 39 (<40) (cc), 2 (<1)
19. density of open swimming pools: 83 (>90), 3, 14, –, –
23. population: –, 6, 14, 12 (<5), 6 (<1)
24. mortality on roads: 74, 10 (<1), 16, –, –
27. recreation facilities: 28 (>50), 36, 11 (20), –, 25 (<5)
29. population growth: –, 63, –, 16 (>60), 21 (<1)
31. forestation: 75, –, –, 25

the expected values are stated in parentheses (where "<" resp. ">" means "less than" resp. "more than"). the sign "–" means that the scale described in the column was not created for that question. the coloured cells contain the most frequent answer.
some questions contain more variants of the context colour; these are separated as "with colour context" and "alternative" (e.g. the question with the map of the density of suicides contains three context variants: grey, red, and ruby). the letters "cc" in question 16 mean a qualitative (improper) scale with colour context.

a) results
qualitative/quantitative data
the ability of students to identify the difference between representations of quantitative and qualitative data was tested by selective questions only. let us look at table 2 once more. despite the fact that the suitable colour variant was the most frequently chosen in each question, the second one was almost every time the improper one (a qualitative scale for quantitative data and vice versa). a statistical test (t-test) showed that the improper colour variant is chosen by 20 % of students. table 3 compares the expected and actual percentages of students who chose the improper colour variant.

table 3: comparison of expected and real numbers of respondents who chose the improper colour variant [%]
question: 1. male and female rate, 2. average wage, 3. the most common surname, 5. soil usage, 8. population growth, 10. results of traffic collisions, 13. local elections, 15. rate of foreigners, 16. forestation, 23. population, 27. recreation facilities, 29. population growth
expected numbers: <1, <5, <1, <1, <1, <1, <1, <5
real numbers: 9, 22, 27, 3, 23, 23, 30, 10, 41, 6, 25, 21

some questions show results strongly different from 20 %. questions 1 and 5 present qualitative data with a strong colour context, but the context colours are not used in the quantitative colour scale. students probably chose the appropriate colour scale because of the context colour, without thinking about the qualitative or quantitative character of the data. the influence of colour is obvious in question 16 (fig. 2), where the context colour was used in both the qualitative and the quantitative version; the statistical test showed that the numbers of votes for both variants of that question were congruent. when respondents chose the improper colour variant, they often explained their choice by saying that the chosen colours are nice, pretty, or well contrasting.

colour context
table 2 clearly shows that whenever one of the colour variants of a map was made with context colours, participants chose that one most frequently. moreover, when qualitative and quantitative colour scales were both offered in context colours, they were chosen with the same frequency. in most cases, more than half of the respondents chose the context colour variant or the alternative context colour variant.

figure 2: forestation in sub regions of the highlands region. used colour scales: a – quantitative (suitable) context scale, b – qualitative (improper) scale, c – qualitative context scale, d – quantitative scale, e – quantitative diverging scale, f – grey scale

for detailed processing the t-test was applied. table 4 compares the expected values with the values obtained.

table 4: comparison of expected and received numbers of answers for colour context map variants [%]
question (variant): 1. male and female rate (prim., alt.), 2. average wage, 5. soil usage (prim., alt.), 6. number of suicides (prim., alt1., alt2.), 10. results of traffic collisions (prim., alt.), 13. local elections, 16. forestation, 19. open swimming pools (prim., alt.), 24. mortality on roads (prim., alt.), 27. recreation facilities (prim., alt.)
expected values: >50, <10, –, –, 20, 10, 10, –, –, –, >80, >60, >90, –, –, –, >70, 20
received values: 80, 11, 49, 21, 67, 58, 21, 5, 35, 8, 62, 91, 84, 14, 74, 16, 28, 11

when more associative variants for a map occur, they are separated as prim. (primary) and alt. (alternative) (e.g. mortality on roads was mapped by grey, red and blue quantitative scales; grey represents the primary context variant, red the alternative context variant, and blue has no colour association, therefore it does not occur in the table).

sequential/divergent scale
in the questionnaire, a few selective questions aimed to find out whether students would assign divergent colour schemes to data which could be symbolized by a diverging scale. a triplet of colour variants of a map showing population growth was used in two questions: in the first one only with a title, in the second one with a title and a legend referring to zero-crossing data. in both questions, more than 60 % of the students chose the sequential scale as more suitable, while the divergent scale was chosen by 11 % of respondents in the question with the title only, and by 16 % in the question with a legend. a statistical test did not confirm a difference between these two results.
the questionnaire also included 3 additional questions to find out whether students can work with a diverging scale used on a map. in the first one, forestation was mapped by an orange-yellow-lime green diverging scale. students should choose the region with the biggest rate of forestation. as expected, most of them chose the region coloured by saturated green (77 %); nevertheless, almost one fifth of them chose an orange coloured region.
another question aimed to find out whether students can identify the border on a scale which represents the breaking value (usually zero or the mean). a diverging colour scale was drawn with 5 steps: 3 of them were red, 2 of them were blue. number values on both ends of the scale indicated its range (a negative number on the left end, a positive number on the right end). respondents should assign values to the borders of the colour steps. it was expected that at least 20 % of them would assign the value of zero to the border between light blue and light red; however, only 5 % of the students did so. this result could be caused by the fact that students expected sections of the same length, and thus they divided the interval that way.
in the last additional question, a map of average temperature was made in 3 colour variants: one with a symmetric divergent scale, one with an asymmetric divergent scale (both divergent scales were blue-beige-red) and one with a yellow-red sequential scale. students should assign a temperature interval to each of these variants. it was observed whether the interval crosses zero. for the variant with the symmetric scale, 87 % of the students wrote down an interval crossing zero, 60 % of the respondents wrote one down for the asymmetric variant, and 9 % of the respondents for the sequential variant.
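the expectation of equally long colour steps, which the answers above suggest, can be made concrete with a small computation. the sketch below is not part of the original questionnaire and uses purely illustrative numbers; it derives the class borders of a five-step scale in two ways – with steps of equal length, and with the breaking value fixed at zero (two classes below it, three above it, as in the blue-red scale described above).

```python
def equal_length_borders(vmin, vmax, steps):
    # borders of `steps` classes of equal width between vmin and vmax
    width = (vmax - vmin) / steps
    return [vmin + i * width for i in range(steps + 1)]

def diverging_borders(vmin, vmax, breaking=0.0, below=2, above=3):
    # borders of a diverging scale with `below` classes under the breaking
    # value and `above` classes over it; the hue change sits exactly on
    # `breaking`, so the classes on both sides may have different widths
    lower = [vmin + i * (breaking - vmin) / below for i in range(below)]
    upper = [breaking + i * (vmax - breaking) / above for i in range(above + 1)]
    return lower + upper

# illustrative range only: values from -10 to +30, five classes
print(equal_length_borders(-10, 30, 5))   # [-10.0, -2.0, 6.0, 14.0, 22.0, 30.0]
print(diverging_borders(-10, 30))         # [-10.0, -5.0, 0.0, 10.0, 20.0, 30.0]
```

with an asymmetric data range the two sets of borders differ, which is exactly the situation in which only 5 % of the students placed the zero correctly.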
correlation
while filling in the questionnaire, some factors certainly influenced the respondents' answers. some of them, such as each person's specific perception of colours, are not detectable, while some external factors could be identified. the experiment took place on two days with different weather (cloudy on the first day and sunny on the second day); on each day the questionnaire was presented to one class of students of the 1st year and one class of students of the 2nd year of study. of these classes, one was from the eight-year gymnasium and one from the four-year gymnasium, so all factors could be tested without the risk of confusing them. for the processing, the chi-squared test of independence and the paired t-test were used.
a correlation between the answer and the type of study (eight-year or four-year gymnasium) was found in the questions about quantitative and qualitative data and their representation. table 5 shows the relative numbers of students who chose the improper colour variant. the statistical test proves a significant difference in answers between the two types of study: students of the four-year gymnasium generally choose the improper colour variant more often than students of the eight-year gymnasium.

table 5: numbers of students who chose the improper colour variant according to the type of study [%]
question: 1. male and female rate, 2. average wage, 3. the most common surname, 5. soil usage, 8. population growth, 10. results of traffic collisions, 13. local elections, 15. rate of foreigners, 16. forestation, 23. population, 27. recreation facilities, 29. population growth
eight-year gymnasium: 2, 7, 46, 0, 16, 21, 26, 7, 37, 5, 27, 14
four-year gymnasium: 15, 37, 43, 7, 28, 26, 35, 13, 44, 7, 23, 27

the results of one of the questions also indicate a correlation between age (respectively the year of school) and the perception of green as the context colour of money. while 59 % of the students of the 2nd year of study chose the green sequential scale for showing the monthly average wage, the green variant was chosen by only 45 % of the younger students. the answers of undergraduate students, 75 % of whom chose green, support the theory that the green-money association is acquired with age.
during the processing, correlations were also detected between the weather and the answers to some questions. the reason lies mostly in the way the questionnaire was presented: bright colours stayed expressive in the slide show on both days, while less saturated colours lost their hue on the sunny day.
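the independence testing mentioned above can be reproduced with standard tools. the sketch below shows how a chi-squared test of independence could be run on a 2x2 table of proper/improper answers split by type of study; scipy is assumed to be available and the counts are invented for illustration – they are not the data collected in this experiment.

```python
from scipy.stats import chi2_contingency

# rows: eight-year / four-year gymnasium; columns: proper / improper colour variant
# illustrative counts only (row totals match the class sizes in table 1)
observed = [[36, 7],
            [33, 13]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# a p-value below the chosen significance level (e.g. 0.05) would indicate
# that the choice of colour variant depends on the type of study
```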
b) comments on the results
at the beginning of the experiment, general hypotheses were determined in order to be compared with the observed results. in spite of the fact that the tests of independence proved a correlation between certain factors and the answers to some questions, the outcomes of these questions were included in the processing; this could be done because of the equal distribution of these aspects.
h1: only an insignificant number of respondents choose a colour other than the context one for showing a phenomenon with a common colour association.
this hypothesis was dismissed. when the mapped phenomenon had a colour context, at least 10 % of the students chose a different colour variant.
h2: respondents choose the colour which they like most for showing a phenomenon without a clear colour association.
this hypothesis has not been proved. students often justified their choice from other points of view (colour contrast, suitability of a sequential scale for a rising phenomenon). from 14 % to 52 % of the respondents who wrote down a reason for their choice stated their liking for the colours.
h3: almost all respondents choose the corresponding colour scales for showing quantitative and qualitative data, respectively.
this hypothesis was accepted. the suitable colour scale was preferred by almost 80 % of the respondents. the choice also depended on the type of study: students from the eight-year gymnasium chose the suitable colour variant relatively more often than students of the four-year gymnasium.
h4: more than half of the respondents can work with a divergent colour scale.
this hypothesis was neither accepted nor dismissed. more than 75 % of the students could correctly compare colours on a diverging scale (while on a converging scale 85 % of them qualified the appropriate relation). however, only 5 % were able to correctly assign the breaking value on a diverging colour scale with intervals of different lengths.

5. conclusion
the experiment aimed to determine whether the colour scales commonly used on thematic maps are understandable for high school students. the research analysed whether they discern differences between certain characteristics of data (such as qualitative and quantitative data), whether they can assign a corresponding colour scale to the data, and whether they can work with maps with various colour scales. variances between colour scales as well as between different characteristics of data were expected to be easily found by the majority of students.
the processing of the research shows that most of the students can clearly differentiate qualitative and quantitative data and can assign an appropriate colour scale to them. in addition, the assignment of an appropriate colour scale is easier for the students of the eight-year gymnasium than for the students of the four-year gymnasium. the ability to intuitively differentiate qualitative and quantitative data may lie behind the problems with using a rainbow scale to present ordered data, as described in (borland, 2007).
it was also detected that the possible colour context of the scale strongly influences the chosen variant. in general, the colour used and its colour context significantly affect the perception of the mapped feature. this is illustrated by the frequent misinterpretation of the hypsometric scale, whose tints students confuse with information about vegetation or climate (patterson, 2011).
the understanding of diverging colour scales was neither confirmed nor dismissed. the propriety and usability of diverging colour scales are discussed in several studies with different results. lewandowsky suggests using bipolar scales with caution (lewandowsky, 1993); ware also advises rather using a scale increasing monotonically in luminance to obtain accurate reading (ware, 1988). on the other hand, moreland prefers diverging scales for their logical structure (moreland, 2009), and maceachren mentions a study that proves the perception of order in the blue-red diverging scale (maceachren, 2004). this paper confirms the necessity of proper consideration when using a diverging scale, since it leads to misreading more easily than a converging scale, but it also indicates that most students are able to understand and correctly compare its end colours. the experiment offers several possibilities for further research.
differences between the answers of students of gymnasiums and students of other high schools (e.g. technical colleges) could be determined. the research can be extended to obtain more general information about the perception of colour scales by pupils. these results could then be compared with the maps used in school atlases and other schoolbooks. the questionnaire could also be submitted to other students of the 1st and 2nd year of gymnasium and the results of the current and previous research compared.

references
[1] bertin, jacques. graphics and graphic information-processing. berlin: w. de gruyter, 1981. isbn 3110088681.
[2] brewer, cynthia a. designing better maps: a guide for gis users. california: esri press, 2005. isbn 1-58948-089-9.
[3] brewer, cynthia a., hermann, douglas, maceachren, alan m., pickle, linda w. mapping mortality: evaluating color schemes for choropleth maps. annals of the association of american geographers. 1997, vol. 87, no. 3, p. 411-438.
[4] borland, david, taylor, russell m. rainbow color map (still) considered harmful. ieee computer graphics and applications. 2007, vol. 27, no. 2, p. 14-17. on-line available at: http://www.jwave.vt.edu/~rkriz/projects/create_color_table/color_07.pdf
[5] cartwright, william, gartner, georg, lehn, antje. cartography and art. berlin heidelberg: springer, 2009. isbn 978-3-540-68567-8.
[6] christophe, s., zanin, c., roussaffa, h. colours harmony in cartography. proceedings of the 25th international cartographic conference [online]. paris: 2011. id: co-084. isbn 978-1-907075-05-6. on-line available at: http://icaci.org/files/documents/icc_proceedings/icc2011/oral%20presentations%20pdf/b1-graphical%20semiology,%20visual%20variables/co-084.pdf
[7] lewandowsky, stephan, herrmann, douglas j., behrens, john t., li, shu-chen, pickle, linda, jobe, jared b. perception of clusters in statistical maps. applied cognitive psychology. 1993, vol. 7, is. 6, p. 533-551.
[8] maceachren, alan m. how maps work: representation, visualization, and design. new york: guilford press, 2004. isbn 1-57230-040-x.
[9] moreland, kenneth. diverging color maps for scientific visualization (expanded). not-expanded version published in: proceedings of the 5th international symposium on visual computing. 2009. on-line available at: http://www.sandia.gov/~kmorel/documents/colormaps/
[10] oravcová, jitka. colour importance in visualization of information. journal of technology and information education [online]. 2009, vol. 1, no. 2, p. 24-32. issn 1803-537x. on-line available at: http://www.jtie.upol.cz/clanky_2_2009/oravcova.pdf (in slovak)
[11] patterson, tom, jenny, bernhard. the development and rationale of cross-blended hypsometric tints. cartographic perspectives. 2011, no. 69, p. 31-46.
[12] phillips, richard j., noyes, liza. a comparison of color and visual texture as codes for use as area symbols on thematic maps. ergonomics [online]. taylor & francis, 1980, vol. 23, no. 12, p. 1117-1128. issn 0014-0139. on-line available at: http://www.richardphillips.org.uk/maps/map1980.pdf
[13] šákrová, michaela. analysis of contents of teaching maps of czech geography textbooks and school atlases. prague: charles university, faculty of science, 2010. bachelor thesis. on-line available at: https://is.cuni.cz/webapps/uksessionc6acffbaf6b62adfcff0f049f7b7b649/zzp/download/130000895/?back_id=3 (in czech)
[14] široká, silvie. the evolution of the school geographic atlases, their conceptions, themes and scales. brno: masaryk university, faculty of education, 2009. bachelor thesis. on-line available at: http://is.muni.cz/th/209487/pedf_b/bakalarska_prace.pdf (in czech)
[15] voženílek, vít. aplikovaná kartografie i. – tematické mapy. 2nd ed. olomouc: palacký university, 2004. isbn 80-244-0270-x. (in czech)
[16] ware, colin. color sequences for univariate maps: theory, experiments, and principles. ieee computer graphics and applications. 1988, vol. 8, no. 5, p. 41-49. on-line available at: http://ccom.unh.edu/sites/default/files/publications/ware_1988_cga_color_sequences_univariate_maps.pdf

proposal of a python interface to openmi, as the base for open source hydrological framework
robert szczepanek
division of hydrology, cracow university of technology, poland
e-mail: robert@szczepanek.pl
keywords: hydrological framework, python, openmi, open source, water framework directive

abstract
hydrologists need a simple, yet powerful, open source framework for developing and testing mathematical models. such a framework should ensure long-term interoperability and high scalability. this can be achieved by implementation of existing, already tested standards. at the moment two interesting options exist: the open modelling interface (openmi) and the object modeling system (oms). openmi was developed within the fifth european framework programme for the integrated watershed management described in the water framework directive. openmi interfaces are available for the c# and java programming languages. the openmi association is now in the process of reaching an agreement with the open geospatial consortium (ogc), so the spatial standards existing in openmi 2.0 should be better implemented in the future. the oms project is a pure java, object-oriented modeling framework coordinated by the u.s. department of agriculture. a big advantage of oms compared to openmi is its simplicity of implementation. on the other hand, openmi seems to be more powerful and better suited for hydrological models. finally, the openmi model was selected as the base interface for the proposed open source hydrological framework.
the existing hydrological libraries and models usually focus on just one gis package (hydrofoss – grass) or one operating system (hydrodesktop – microsoft windows). the new hydrological framework should break those limitations. to make the implementation of hydrological models as easy as possible, the framework should be based on a simple, high-level computer language. low- and mid-level languages, like java (sextante) or c (grass, saga), were excluded as too complicated for a regular hydrologist. of the popular high-level languages, python seems to be a good choice. the leading gis desktop applications – grass and qgis – use python as a second native language and provide a well documented api. this way, a python-based hydrological library could be easily integrated with any gis package supporting this programming language. as the openmi 2.0 standard provides interfaces only for java and c#, the python interface for the openmi standard presented in this paper is the first step towards an open and interoperable hydrological framework. gis-related issues of the openmi 2.0 standard are also outlined and discussed.

introduction
mathematical modelling in hydrological sciences has made use of geospatial functions since the early 1990s [15]. recent developments in remote sensing and automatic data acquisition technologies have led to an increase in available data. analysis and processing of those data in distributed models is very difficult without access to geospatial systems. the hydrogis conferences held in 1993 and 1996 in vienna showed the big interest in the practical application of geographical information systems (gis) in the field of hydrology. at the cracow university of technology (poland), in the late 1980s, we developed a gis-based, distributed hydrological model wistoo [27]. the model was successfully implemented in several polish and a few other european catchments. unfortunately, due to its closed license and monolithic structure in the c language, the model is hard to maintain and develop. we faced the problem of how to build a new, modern environment for hydrological modelling.
in modular programming, software is composed of separate, interchangeable components called modules, each accomplishing one or more functions. modules improve the development of software by defining logical boundaries between the program components. this makes a modular system more reusable than a monolithic one, as modules can be reused in other programs. an example of such a module is r.watershed in grass [18]. modules are collected into programs (packages, toolsets) or libraries. grass and qgis [29] are examples of programs, and their goal is to accomplish certain functions. libraries contain code that provides services to independent programs; the gdal [9] and proj4 projects are examples of libraries. modules are typically incorporated into a program through interfaces. interfaces are abstract types exposing internal behaviours to external modules. a framework can be seen as a collection of software libraries with defined application programming interfaces (api). it is a foundation structure for developing applications. in terms of software design, the framework is a reusable software template, or skeleton, from which key enabling and supporting services can be selected, configured and integrated with the application code [21].
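as a concrete illustration of the library idea, the short sketch below reads a raster through the gdal python bindings; the file name is only a placeholder, and the snippet is a generic example rather than part of wistoo or of the framework discussed later.

```python
from osgeo import gdal

# open a digital elevation model through the gdal library api
# ("dem.tif" is a placeholder file name)
dataset = gdal.Open("dem.tif")
band = dataset.GetRasterBand(1)
elevation = band.ReadAsArray()   # raster values as a numpy array

print(dataset.RasterXSize, dataset.RasterYSize)
print(elevation.min(), elevation.max())
```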
in general, an open source hydrological framework is composed of common interfaces with underlying libraries, and its main goal is to meet the needs of hydrological modellers by easy customization and adaptation of the available libraries (fig. 1).

figure 1: general structure of an open source hydrological framework

there are two general approaches to linking hydrology and gis. the first one is based on the development of hydrological functions within the gis environment. such models are easier to integrate with gis and provide better interoperability. examples of this approach can be found in grass [18] or saga [12], module-based systems. an alternative approach focuses on the hydrological functions, and gis capabilities are treated as additional. in such a case it is very hard and time consuming to build the geospatial capabilities of the system from scratch. it is much easier to build a hydrological system on top of existing applications (like qgis) or libraries (like gdal) than to add geospatial functionalities to a hydrological system. in that sense, the proper selection of the gis foundation is a crucial step. fortunately, there is a big diversity of gis projects in the world of free and open source software for geomatics (foss4g). on the other hand, the hydrological framework should not be coupled with just one geospatial system. there are many great sources of code for such a base gis system: grass, saga, qgis, thuban, whitebox gat or hidrosig [28]. there are also ongoing projects to integrate some of the mentioned applications. several programs are already able to use the sextante [22] library (see udig [36], gvsig [34], openjump [36]) or grass modules (see qgis). an interesting project – the qgis processing framework – was also started to build an environment for the execution of modules from external projects (saga, grass, otb [13], ossim [36]) within the qgis application.
there is a big variety of applications and libraries, but the problem i faced during the research was how to provide a hydrologist-friendly programming environment. once developed, the modules should be reusable and easy to modify, not only by the author but also by other scientists. in many existing hydrological applications, based on low-level computer languages, changes in the existing code are difficult for a regular hydrologist. instead of fighting with the code, one should focus on the problem to be solved. to reach this goal, a framework that is easy to use and implement, and based on a high-level computer language, is needed. such a framework should be available with standard interfaces, good documentation, tutorials and case studies.
another important issue related to many existing applications is the lack of interoperability and the redundancy. the redundancy of functions/modules can cause inconsistency, as the implemented methods can differ between applications. it will probably be easier for the hydrological community to develop and maintain just one implementation instance. in fact, there are already many elements of this puzzle, but they are hard to merge. to make the life of a hydrologist researcher easier, it is not another hydrological application that is needed, but a library with interoperable functions of the basic hydrological processes. based on a module from such a library, any application can be built on top, and several alternative hypotheses or approaches can be tested easily. it should also be possible to use other external models compatible with the selected interface, or even to use a hydrological library as the back-end of a web processing service (wps).

modelling frameworks
in the year 2000, the european union approved the european water framework directive (wfd 2000/60/ec). the wfd became an important document for integrated river basin management and interdisciplinary studies, not only on our continent. many countries faced the problem of weak interoperability of the existing information systems coming from different environmental domains. additionally, trans-boundary issues and international cooperation became an important element of wfd reporting. the big diversity of standards in europe is a fact; one of the potential options was a common, versatile interface built on top of the existing systems. based on the experience from previous projects, two groups initiated the development of completely different modelling frameworks: the object modeling system (oms) [7] and the open modelling interface (openmi) [26]. the assumptions of those two systems were different, so finally two different solutions were elaborated. the first one (oms) was very simple and pragmatic, while the second one (openmi) was more sophisticated and strictly oriented to the wfd needs. the simpler framework allows only linear workflows of modules/models, while the second one enables feedbacks and looping.

oms
the object modeling system (oms) framework has been developed in a joint approach by the u.s. geological survey (usgs), the u.s. department of agriculture (usda) and the friedrich schiller university of jena, germany [7]. the prototype version oms 1.0 was published in 2001; the latest version, oms 3.0, called the next generation modeling framework (ngmf), has been released recently. oms is a pure java, lightweight, object-oriented modelling framework working in the netbeans environment. oms 3.0 provides programming interfaces in fortran, c and c++. it supports geospatial integration, calibration tools and sensitivity analysis. in the newest version a minimally invasive approach was applied and, as a result, no framework data types and no interfaces are imposed. oms 3.0 is multithreaded. its components are plain java objects enriched with descriptive metadata by means of language annotations [8]. the annotations are used to specify resources in a class that relate to its use as a component. they allow the extension of java programs with meta information that can be picked up from sources, classes, or at runtime. an oms component implements just three methods – initialize, execute and finalize. the components are designed with a standard, well-defined interface in mind. they are self-contained units from the conceptual and technical perspective, and can be developed and tested individually [19]. several environmental models have been implemented using oms 3.0; one of them is the precipitation runoff modeling system (prms/oms). one of the newest oms implementations, interesting from the hydrological point of view, is the jgrasstools project [14] based on the jgrass application. jgrasstools is a fast growing and very well documented library of basic hydrological and geomorphological algorithms.
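oms itself is a java framework and marks the three phases with annotations; purely to illustrate the three-phase component idea in the language used later in this paper, a schematic (non-oms) python analogue could look as follows, with invented names and a deliberately trivial computation.

```python
class RunoffComponent:
    # schematic illustration of a three-phase component; this is not the
    # oms api, only the initialize/execute/finalize idea expressed in python
    def initialize(self, parameters):
        self.coefficient = parameters.get("runoff_coefficient", 0.5)

    def execute(self, rainfall_mm):
        # a deliberately trivial "model": effective runoff in millimetres
        return self.coefficient * rainfall_mm

    def finalize(self):
        print("component finished")

component = RunoffComponent()
component.initialize({"runoff_coefficient": 0.4})
print(component.execute(12.0))   # 4.8
component.finalize()
```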
it should be also possible to use other external models compatible with the selected interface, or even to use a hydrological library as the web processing service (wps) back-end. modelling frameworks in the year 2000, the european union approved the european water framework directive (wfd 2000/60/ec). wfd become an important document for the integrated river basin management and interdisciplinary studies not only on our continent. many countries faced the problem of weak interoperability of the existing information systems, coming from different environmental domains. additionally, trans-boundary issues and international cooperation become an important element of wfd reporting. big diversity of standards in europe is a fact; one of potential options was common versatile interface built on top of the existing systems. based on the experience from previous projects, two groups initiated development geoinformatics fce ctu 2011 95 szczepanek r.: proposal of a python interface to openmi, as the base for open source hydrological framework of completely different modelling frameworks: object modeling system (oms) [7] and open modelling interface (openmi) [26]. the assumptions of those two systems were different, so finally two different solutions were elaborated. the first one (oms) was very simple and pragmatic, while the second one (openmi) was more sophisticated and strictly oriented to the wfd needs. the simpler framework allows only linear workflows of modules/models, while the second one enables feedbacks and looping. oms object modeling system (oms) framework has been developed in a joint approach by the u.s. geological survey (usgs), the u.s. department for agriculture (usda) and the friedrichschiller-university from jena, germany [7]. the prototype version oms 1.0 was published in 2001; the latest version oms 3.0, called next generation modeling framework (ngmf), has been released recently. oms is pure java, lightweight, object-oriented modelling framework working in the netbeans environment. oms 3.0 provides programming interfaces in fortran, c and c++. it supports geospatial integration, calibration tools and sensitivity analysis. in the newest version, a minimal invasive approach was applied, and, as a result, no framework data types and no interfaces were provided. oms 3.0 is multithreaded. its components are plain java objects enriched with descriptive metadata by means of language annotations [8]. the annotations are being utilized to specify resources in a class, that relate to its use as a component. they allow for the extension of the java programs with meta information that can be picked up from sources, classes, or at runtime. the oms component metadata implements just three methods – initialize, execute and finalize. the components are designed with a standard well-defined interface in mind. they are self-contained units from the conceptual and technical perspective, and can be developed and tested individually [19]. several environmental models are implemented using oms 3.0. one of them is the precipitation runoff modeling system (prms/oms). one of the newest oms implementations, interesting from hydrological point of view, is the jgrasstools project [14] based on the jgrass application. the jgrasstools is fast growing and very well documented library of basic hydrological and geomorphological algorithms. openmi openmi was developed in 2001 within the fifth european framework programme as a part of the harmonit project, and later the openmi-life projects. 
partners of the project were: natural environment research council (uk), dhi (dk), deltares (nl), wallingford software (uk), national technical university of athens (el), university of thessaly (el), aquafin (be), vmm – ak (be), flanders hydraulics research (be) and université de liège (be). the idea was to easily combine programs from different providers, enabling a modeller to make free choice of the best model suited to the particular needs. the openmi standard is an interface definition for the computational core of the computer models in the water domain [25]. model components that comply with this standard can, without any programming, geoinformatics fce ctu 2011 96 szczepanek r.: proposal of a python interface to openmi, as the base for open source hydrological framework exchange data at run-time [10]. openmi is a pull-based architecture that consists of linked components (source components and target components) which exchange memory-based data in the single-threaded architecture. the openmi standard is defined by a set of software interfaces that a compliant model or component must implement. the interfaces are available in the c# and java languages. version 2.0 of openmi released in november 2010 specifies the base interfaces and the extension supporting the timeand space-dependent component (openmi.standard2.timespace) [26]. to support the development of openmi compliant components, a software development kit (sdk) has been provided for .net developers [26]: • backbone – a default implementation for the majority of the openmi.standard2 interfaces, • developmentsupport – some very generic support utilities, • buffer – utilities for timestep buffering and time interpolation and extrapolation, • spatial – utilities for spatial interpolation, • modelwrapper – utilities to facilitate wrapping of existing models. the usage of the openmi namespace is the mandatory part of any openmi compliant software component. in standard documentation [26][24], list of interfaces is described, as minimal and as complete as possible, to define exactly the data that is being exchanged. every compliant component must have an associated registration file in the xml format and implement the ibaselinkablecomponent interface. the exchange items are used in the initial phase to define what the component can provide (as output), and what information the component accepts (as input). in openmi 2.0, values can be also transformed as needed with help of adopted outputs. the openmi does not use standardized data dictionaries, except si units. the component life is divided into five phases: initialization, configuration, preparation, execution and completion. every phase is well described in documentation [24]. in august 2011, the openmi association and open geospatial consortium (ogc) have signed the memorandum of understanding to cooperate in standards development and promotion of open standards related to computer modelling. at the moment, spatial operations are based on vector objects and no direct raster support is available (fig. 2). spatial elements are represented as points, lines, polylines, polygons or polyhedrons in element sets containing information about the georeference system in the form of wkt [20]. it is possible to link 1d and 2d models/components and exchange data between them in run time. to perform more advanced spatial operations, openmi authors prepared sdk (extension spatial) and sample code files. from the beginning, openmi was released under the lesser general public licence (lgpl). 
in august 2011, the openmi association and the open geospatial consortium (ogc) signed a memorandum of understanding to cooperate in standards development and in the promotion of open standards related to computer modelling. at the moment, spatial operations are based on vector objects and no direct raster support is available (fig. 2). spatial elements are represented as points, lines, polylines, polygons or polyhedrons in element sets containing information about the georeference system in the form of wkt [20]. it is possible to link 1d and 2d models/components and exchange data between them at run time. to support more advanced spatial operations, the openmi authors prepared the sdk (extension spatial) and sample code files. from the beginning, openmi has been released under the lesser general public licence (lgpl). openmi 1.4 compliant software includes, i.a., infoworks, isis, swat, sobek and mike.

figure 2: interfaces related to spatial data in openmi 2.0

open source hydrological libraries and applications
the first group of interest are complete hydrological open source models. they are standalone applications or applications created within one of the popular gis platforms. the most popular and well known open source hydrological models are:
• hydrofoss [5], gipe, topmodel [4], answers [30] (built on the grass project),
• geotop [32] (jgrass),
• ihacres [6] (saga),
• pihm [37] (qgis),
• his desktop [1],
• hidrosig [28],
• hydroplatform [11] (thuban),
• kalypso [2],
• hype [16].
of the models mentioned above, kalypso has recently been one of the most popular. there are also open source hydrological modules available in proprietary software, like taudem [35] in arcgis. functions for geomorphological analysis based on digital elevation models can be found in almost every gis package. also, almost every gis package can be used for hydrological pre- and post-processing of spatial data. there are, however, packages which contain much more advanced functionalities oriented strictly to hydrological analysis, such as:
• r.watershed, r.drain, r.flow, r.stream.* (grass),
an advantage of a library compared to an application is the fact that the library must be interoperable to survive, so its implementation should be simpler. a well-designed library should be interoperable not only in terms of the interface (hardcoding vs. openmi), but also in terms of the access method (direct access vs. wps). the presented libraries and applications represent only small part of all available resources. towards interoperable hydrological framework the main goal of development of the hydrological framework is to provide a relatively easy but powerful environment for research in hydrological sciences. this is more oriented to educational purposes than operational hydrology. within the presented work, two steps towards developing the open source hydrological framework have been made. the first one was the analysis of potential elements to be used. it included gis platforms, hydrological libraries and applications, modelling frameworks and interfaces. of course, all of them are open source. based on this analysis, decision on optimal platform selection was made. as result of this analysis, the second step was concentrated mainly on adoption of a selected interface standard for this project’s purposes. geoinformatics fce ctu 2011 99 szczepanek r.: proposal of a python interface to openmi, as the base for open source hydrological framework in general, when building an open source hydrological framework, the following goals were taken into consideration: • free and open source license, • implementation of one of the popular modelling frameworks; powerful, with long-time support, • reusability of components, • simplicity of coding and use of cross-platform computer language, • use of already existing code and algorithms, • scalable architecture, • simplicity of implementation (research), but not computational efficiency (operational purposes), • compatibility with popular gis environments (both server and desktop), • gui independency. selection both presented frameworks (openmi and oms) are modern, well designed and versatile. openmi has strong support from the leading water resources firms in europe, while oms is more u.s. public administration related. openmi pays much attention recently on the spatial aspects of modelling. oms is java-based while openmi implements both c# and java interfaces. big advantage of oms compared to openmi is simplicity of implementation. one of the arguments in favour of openmi is that many of existing hydrological models, listed by the community surface dynamics modeling system, plan to implement this standard. there are already examples of openmi use as a link between different models, like pihm and taudem [17], his desktop [1], but mainly on windows platform. finally, the openmi standard has been selected as a base for the developed hydrological framework. through the openmi interface, it will be possible to access models from different consumers. in the first stage, the access from desktop applications, like qgis and grass, will be tested. web services like wps, installed as server applications, are the second potential consumer. finally, it will be possible to get use of any openmi compliant, external model (component). in the drihms project [33], focused on high performance computing, openmi is used as the interface between components. there are many valuable open source hydrological libraries and modules to be included in the developed hydrological framework. the first practical limitation is the problem of platform dependency. 
many existing hydrological applications focus on only one operating system. his desktop [1], developed by the consortium of universities for the advancement of hydrologic sciences (cuahsi), is a good example of that. it uses c#/.net, limiting the use of his desktop to the microsoft windows operating system, and even the mono project does not seem to be the best direction to solve this problem. that is why cross-platform languages, like java or python, are probably a better choice.
the existing gis applications with hydrological functions use very different computer languages. the most popular are:
• c/c++ – grass, qgis, saga,
• java – udig, sextante,
• c# – hydrodesktop,
• python – thuban.
there are, however, bindings to other languages; for example, grass, qgis and saga plugins can easily be created in python. as the hydrological framework should be used mainly by non-professional programmers, the most popular but difficult low-level languages, like c and java, were excluded [31]. a perfect language should be easy to learn and implement. from the world's top 10 languages, only python is a high-level, open source and cross-platform language with good support in gis applications. according to the tiobe portal, python was the "programming language of the year 2010", with the highest rise in the ratings. the python interpreter is not the fastest engine, especially when compared with compiled languages like c++, but this can be optimized using tools like theano [3] to compile mathematical expressions and make use of the graphics processing unit. only hydroplatform uses python as its native language. two popular desktop gis applications (qgis and grass) use python as the second language in development. finally, python has been selected as the basic language for the hydrological framework. the only problem is that there are no openmi interface standards for python. having openmi as the modelling framework and python as the basic language, the important step in the framework implementation was the translation of the c# openmi specification to python.

implementation
as there are substantial differences between the source (c#, java) and target (python) languages of the openmi interfaces, to keep compatibility with the source, no major changes have been made and all the comments were copied from the source files. the first significant difference between the source and target languages is related to type definitions. python is dynamically typed and does not require an explicit declaration of variables before they are used, so the data types in the python version of openmi have a rather descriptive function and rely on the "duck typing" paradigm. the second difference relates to the implementation of interfaces. the interface mechanism is typical for the c# and java languages; in python, interfaces are often implicit and defined only by usage and documentation. python supports multiple inheritance, so i decided to implement the openmi interfaces as classes. this should give new developers a better picture of the interfaces' functionality. the naming conventions and namespace standards have not been changed yet and follow the openmi c# implementation.
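the difference can be illustrated with a short sketch: an openmi-style "interface" expressed as an ordinary python class from which a concrete implementation inherits, while callers rely on duck typing rather than on declared types. the names below are illustrative, not the actual translated interfaces.

```python
class IValueSet:
    # openmi-style "interface" written as an ordinary python class;
    # the methods only document the expected behaviour
    def get_values(self):
        raise NotImplementedError

class SimpleValueSet(IValueSet):
    def __init__(self, values):
        self._values = list(values)

    def get_values(self):
        return self._values

def total(value_set):
    # duck typing: any object with a get_values() method is accepted,
    # whether or not it inherits from IValueSet
    return sum(value_set.get_values())

print(total(SimpleValueSet([0.5, 1.5, 2.0])))   # 4.0
```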
as the hydrological framework should follow the open source philosophy, all code under development is publicly available on the internet. the project was named open hydrology, and most of the necessary infrastructure is hosted on the sourceforge portal (http://sourceforge.net/projects/openhydrology/). most of the openmi 2.0 specification standard has been translated to the python language. at this stage, no gui and no sdk for the python version of openmi are available. sample implementations, documentation, a gui for easy model access and a workflow modeller will be built in the future.

discussion
in a time when new projects are launched every day, more and more initiatives tend to integrate efforts to make better use of the available human resources. the gdal project is a good example of this process. in the before-gdal era, every gis package used its own mechanism for data access. the same situation now exists with hydrological models: there are several redundant models unable to exchange data with each other. in order to provide good interoperability, the hydrological framework design is based on the mature and tested modelling framework openmi. there are some threats related to this choice; one of them is the complexity of the openmi standard implementation. python as the basic language for the new hydrological framework was a relatively easy choice, as this computer language already has a mature status in foss4g software. the problem is that in the hydrological domain it is not a very popular language yet.
the translation of the openmi 2.0 standard to python interfaces was the first step needed to start the development of the framework. there are two potential options for the second step. the development of new and complete openmi components in python (fig. 3a), as the long-term goal, is the first one. the second option, faster but rather short-term, is the wrapping of existing hydrological c modules (fig. 3c), for example from grass. for this purpose, python is a very good choice.

figure 3: alternative locations of openmi interfaces (component logic)
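the wrapping option (fig. 3c) can be illustrated with the grass python scripting interface. the sketch below only shows the general idea – it assumes that it is executed inside a grass session, the map names are placeholders, and the call is not yet wrapped in any openmi interface.

```python
import grass.script as gs

# run the existing c module r.watershed from python; "elevation" and the
# output names are placeholder map names in the current grass location
gs.run_command(
    "r.watershed",
    elevation="elevation",
    accumulation="flow_accumulation",
    drainage="drainage_direction",
    overwrite=True,
)

# read basic statistics of the new raster back into python
stats = gs.parse_command("r.univar", map="flow_accumulation", flags="g")
print(stats["min"], stats["max"])
```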
austin, tx 4. beven, k.j., lamb, r., quinn, p., romanowicz, r. and freer, j. (1995). topmodel, in computer models of watershed hydrology, singh v.p. (ed.), water resources publications, 627-668. 5. cannata , m., (2006). a gis embedded approach for free & open source hydrological modelling, phd dissertation, politecnico di milano. 6. croke, b.f.w., andrews, f., jakeman, a.j., cuddy, s. and luddy, a., (2005). redesign of the ihacres rainfall-runoff model, proceedings of the 29th hydrology and water resources symposium, engineers australia 7. david, o. and krause, p., (2002). using the object modelling system for future proof of hydrological model development and application. proceedings of the second federal interagency hydrologic modeling conference, las vegas, nv, july 28 – august 2, 621626. 8. david, o., ascough ii, j., leavesley, g., and ahuja, l., (2010). rethinking modeling framework design: object modeling system 3.0. iemss 2010 international congress on environmental modeling and software – modeling for environment’s sake, fifth biennial meeting, july 5-8, 2010, ottawa, canada; swayne, yang, voinov, rizzoli, and filatova (eds.) 9. donnelly, f.p., (2010). evaluating open source gis for libraries, library hi tech, vol. 28 iss: 1, 131-151. 10. gregersen, j.b., gijsbers p.j.a. and westen s.j.p. (2007). openmi: open modelling interface. journal of hydroinformatics 9(3), 175-191. 11. harou, j., pinte, d., hansen, k., rosenberg, d., tilmant, a., medellin-azuara, j., pulido-velazquez, m., rheinheimer, d., matrosov, e., reynaud, a., kock, b., and vicuna, s., (2009). hydroplatform.org – an open-source generic software interface and web geoinformatics fce ctu 2011 103 szczepanek r.: proposal of a python interface to openmi, as the base for open source hydrological framework repository for water management models. international symposium on environmental software systems, isess 2009, venice, italy. 12. hengl, t., grohmann, c.h., bivand, r.s., conrad, o. and lobo, a., (2009). saga vs grass: a comparative analysis of the two open source desktop gis for the automated analysis of elevation data. geomorphometry 2009 conference proceedings, in: geomorphometry 2009, edited by r. purves, s. gruber, r. straumann and t. hengl. university of zurich, zurich, 22-27. 13. inglada, j. and christophe, e., (2009). the orfeo toolbox remote sensing image processing software, geosciences and remote sensin symposium, 2009 ieee international, igarss 2009, cape town, iv 733-736. 14. jgrasstools project, available at: http://code.google.com/p/jgrasstools/ 15. kopp, s.m., (1996). linking gis and hydrological models: where we have been, where we are going? proceedings og hydrogis 96: application of geographic information systems in hydrology and water resources. iahs publ. no.235. 133-139. 16. lindström, g., pers, c.p., rosberg, r., strömqvist, j. and arheimer, b, (2010). development and test of the hype (hydrological predictions for the environment) model – a water quality model for different spatial scales. hydrology research 41.3-4:295-319. 17. lu, b. and piasecki, m., (2008). development of an integrated hydrologic modeling system for rainfall-runoff simulation, american geophysical union, fall meeting 2008, abstract #h41g-0953, available at: http://adsabs.harvard.edu/abs/2008agufm.h41g0953l 18. neteler, m., bowman, m.h., landa, m. and metz, m., (2012). grass gis: a multipurpose open source gis. environmental modelling & software. 19. object modeling system (oms) project, (2011). 
available at: http://www.javaforge.com/project/oms 20. ogc, (2002). the opengis abstract specification topic 2: spatial referencing by coordinates ogc 01-063r2, opengis consortium inc. 21. ogc, (2012). ogc glossary, available at: http://www.opengeospatial.org/ogc/glossary/ 22. olaya, v., (2012). sextante – spatial data analysis library, available at: http://www.sextantegis.com/ 23. olaya, v. and gimenez, j.c., (2011). sextante, a versatile open–source library for spatial data analysis. 24. the openmi association, (2010). openmi standard 2 reference for the openmi (version 2.0). part of the openmi document series. 25. the openmi association, (2010). scope for the openmi (version 2.0). part of the openmi report series. 26. the openmi association, (2010). openmi standard 2 specification for the openmi (version 2.0). part of the openmi document series. geoinformatics fce ctu 2011 104 http://code.google.com/p/jgrasstools/ http://adsabs.harvard.edu/abs/2008agufm.h41g0953l http://www.javaforge.com/project/oms http://www.opengeospatial.org/ogc/glossary/ http://www.sextantegis.com/ szczepanek r.: proposal of a python interface to openmi, as the base for open source hydrological framework 27. ozga-zielińska, m., gadek, w., ksiażyński, k., nachlik, e. and szczepanek r., (2002). mathematical model of rainfall-runoff transformation – wistoo, mathematical models of large watershed hydrology, ed. v.p.singh, d.k.frevert, water resources publications, 811-860. 28. poveda, g., mesa, o. j. and velez, j. i., (2007). hidrosig: an interactive digital atlas of colombia’s hydro-climatology, journal of hydroinformatics, 9 (2), 145-156. 29. quantum gis development team, (2011). quantum gis geographic information system. open source geospatial foundation project. available at: http://qgis.osgeo.org 30. rewerts, c.c. and engel, b.a., (1993). answers on grass: integration of a watershed simulation with a geographic information system. abstracts, proc., 8th annu. grass gis user’s conf. and exhibition. 31. rey, s.j., (2008). show me the code: spatial analysis and open source, unpublished, available at: http://geodacenter.asu.edu/2008_11 http://geodacenter.asu.edu/2008_11 32. rigon, r., bertoldi, g. and over, t. m., (2006). geotop: a distributed hydrological model with coupled water and energy budgets., journal of hydrometeorology, vol. 7, no. 3, 371-388. 33. schiffers, m., kranzlmuller, d., clematis, a., d’agostino, d., galizia, a., quarati, a., parodi, a., morando, m., rebora, n., trasforini, e., molini, l., siccardi, f., craig, g.c. and tafferner, a., (2011). towards a grid infrastructure for hydro-meteorological research, computer science, vol. 12, 45-62. 34. schröder, d., hildahb, m. and david, f., (2010). evaluation of gvsig and sextante tools for hydrological analysis. 6th international gvsig conference. available at: http://jornadas.gvsig.org/6as-jornadas-gvsig/descargas/articles 35. tarboton, d. g. and baker, m.e., (2008). towards an algebra for terrain-based flow analysis, in representing, modeling and visualizing the natural environment: innovations in gis 13, edited by n. j. mount, g. l. harvey, p. aplin and g. priestnall, crc press, florida. 36. tsou, m.h. and smith, j., (2011). free and open source software for gis education, department of geography, san diego state university, (white paper) 37. yizhong, q., (2004). an integrated hydrologic model for multi-process simulation using semi-discrete finite volume approach, phd thesis, the pennsylvania state university. 
available at: http://www.pihm.psu.edu/downloads/articles/qu_thesis.pdf geoinformatics fce ctu 2011 105 http://qgis.osgeo.org http://geodacenter.asu.edu/2008_11 http://jornadas.gvsig.org/6as-jornadas-gvsig/descargas/articles http://www.pihm.psu.edu/downloads/articles/qu_thesis.pdf geoinformatics fce ctu 2011 106 postgis-based heterogeneous sensor database framework for the sensor observation service ikechukwu maduako institute of geoinformatics, university of münster, germany director of studies, center for advanced spatial technologies & mapping (cast-mp) abuja, nigeria iykemadu84@gmail.com abstract environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. in-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the geo-web services. thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. this process is very massive and unnecessary communication and work load on the service. massive work load in large raster downloads from flat-file raster data sources each time a request is made and huge integration and geo-processing work load on the service middleware which could actually be better leveraged at the database level. in this paper, we propose and present a heterogeneous sensor database framework or model for integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level. and how this can be integrated in the sensor observation service, sos to reduce communication and massive workload on the geospatial web services and as well make query request from the user end a lot more flexible. keywords: heterogeneous sensor database, postgis 2.0, sensor observation service. 1. introduction geo-sensors gathering data to the geospatial sensor web can be classified into remote sensors and in-situ sensors. remote sensors include satellite sensors, uav, lidar, aerial digital sensors (ads) and so on, measuring environmental phenomena remotely. these sensors acquire data in raster format at larger scales and extent. in-situ sensors are spatially distributed sensors over a region used to monitor and observe environmental conditions such as temperature, sound intensity, pressure, pollution, vibration, motion etc. these sensors are measuring phenomena in their direct environment and could be said to acquire data in vector data format. most environmental monitoring and management systems combine these diverse datasets from heterogeneous sensors for environmental modeling and analysis. for example in monitoring of crop actual evapotranspiration at some locations in most cases involves aggregation of remote and in-situ sensor observations [1]. remote and in-situ sensor data aggregation for real-time calculation of daily crop gross primary productivity gpp such as implemented in geoinformatics fce ctu 8, 2012 55 maduako, i: postgis-based heterogeneous sensor database framework a dynamic web mapping service for vegetation productivity [2] and in the marine information system [3] are good examples too. 
meanwhile the process of fusing and processing of these sensor data on the web service currently involves massive data retrieval from different sensor databases, most especially from the raster databases, geo-processing and spatial analytics on service middleware. for web services, this is massive work and communication load over the network and on the service. a sensor database management framework combining remote and in-situ observations would be of great impact to environmental monitoring and management systems. having these disparate sensor data on one database schema can be leveraged in the geospatial web services to reduce excessive work load and data transfer through the network. most of the data fusion, aggregations and processing done by web services can be carried out at the database backend and the results delivered to the client through the appropriate web services. figure 1 is a diagrammatical illustration of our proposed approach, whereby in-situ and remote sensor data are passed to the proposed heterogeneous sensor database. data integration and processing are carried out at the database level within the sos and geo-scientific query or request results are delivered to the clients through the service. figure 1: the conceptual diagram geoinformatics fce ctu 8, 2012 56 maduako, i: postgis-based heterogeneous sensor database framework 2. requirement analysis firstly we had to analyse the fundamental conceptual and practical requirements for the proposed heterogeneous sensor database framework for a seamless integration of remote and in-situ sensor observations at the database level of the sos. the analysis is done taking into consideration the varying properties and the underlying structure or format of these two different sensor datasets (raster and vector). the database model for this purpose can be design as a spatial database model based on the open geospatial consortium, ogc standards. adopting the coverage concept, sensor observations can be approached in coverage perspective. that is to say we can treat in-situ sensor observations as time series vector coverage and remote sensor observations as also time dependent raster coverage. coverages have some fundamental properties, exploring some of these properties and how vector and raster coverages inherit these properties, we can conceptually map out an intersection that will underline the seamless integration of vector and raster (in-situ and remote sensors) coverages in a heterogeneous sensor database schema, see figure 2. according to iso 19123: 2005 “a coverage domain consists of a collection of direct positions in a coordinate space that may be defined in terms of up to three spatial dimensions as well as a temporal dimension” [4]. a coverage is created as soon as a way to query for a certain value given a location is created. coverages can be categorised into two, continuous and discrete coverages. continuous coverage returns a different value of a phenomenon at every possible location within the domain. discrete coverages can be derived from the discretisation of a continuous surface. a discrete coverage consists of different domain and range sets. the domain set consists of either spatial or temporal geometry objects, finite in number. the range set is comprised of a finite number of attribute values each of which is associated to every direct position within any single spatio-temporal object in the domain. that is to say, the range values are constant on each spatio-temporal object in the domain. 
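as a small, purely illustrative python sketch of this idea (not part of the proposed schema), a discrete point coverage is simply an object that can return the stored value for any direct position of its finite domain; the coordinates and values below are made up for the example.

class DiscretePointCoverage:
    """maps direct positions (x, y) to attribute values; the value is
    constant on each object of the domain, as for a discrete coverage."""

    def __init__(self, observations):
        # observations: dictionary {(x, y): value}
        self.observations = dict(observations)

    def value_at(self, position):
        """query the coverage for the value stored at a direct position."""
        return self.observations.get(position)


# three in-situ temperature observations treated as a vector point coverage
lst = DiscretePointCoverage({(18.16, 49.84): 21.4,
                             (18.29, 49.82): 20.9,
                             (18.35, 49.80): 22.1})
print(lst.value_at((18.29, 49.82)))  # prints 20.9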
“coverages are like mathematical functions, they can calculate, lookup, intersect, interpolate, and return one or more values given a location and/or time. they can be defined everywhere for all values or only in certain places ” [5]. raster and vector coverages are both types of discrete coverage. they differ only in how they store and manage their collection of data. as coverages, they allow for basic query functions such as select, find, list etc. to be carried out on them. vector coverages handled as tables are the most common type of coverage implemented in most of the spatial database management systems. individual data item are stored on each row in the table. the columns of the table ensure that collection is self-consistent. texts are placed in text columns, numbers in numeric columns, and geometries in geometry columns and so on. the basic requirement a table must have for potential supply of information to a coverage is to have at least one geometry column and one additional column for a value or an attribute. raster coverages are handled as an array of multidimensional discrete data as discussed in [6]. in postgresql/postgis 2.0 [7] precisely they are stored as regularly gridded data with the geometry of the domain as points and the range could be one or more numeric values (for example number of bands). text values and timestamps may not be possible. geoinformatics fce ctu 8, 2012 57 maduako, i: postgis-based heterogeneous sensor database framework hence, in-situ observations (vector data) can be stored in tables with rows and columns in a relational manner, having one-to-one or one-to-many relationship. on the other hand remote sensor observations (raster) cannot reasonably be stored in tables but as gridded multidimensional array of data (array of points). that is to say we only have to leverage the concept of coverage to integrate the two tables in the database. the possible common column for the two datasets (tables) is the geometry column. therefore the fundamental requirement from this analysis that could enable us to integrate remote and in-situ sensor observations in a common database could be outlined as: • storage of in-situ observations as vector point coverage and • storage remote sensor observations as raster point (pixel) coverage. figure 2 is the uml (unified modeling language) model of the concept and management of coverages describing features, relationships, functions and how they present in the database. the insight to this uml model was extracted from the coverage concept model discussed in postgis wiki [5]. figure 2: uml conceptual model of the concept and management of coverages leveraging these functions and operations that can be carried out on coverages, the database can offer fundamental operations and functions such as intersection, buffering, overlay, interpolation etc for geo-scientific analysis and processing involving in-situ and remote sensor geoinformatics fce ctu 8, 2012 58 maduako, i: postgis-based heterogeneous sensor database framework observations (vector and raster coverages). with these operations, we can easily run queries for example that can lift a point on the vector coverage, intersect it with the geometrically corresponding point or cell on the raster coverage on the database and return a value. the goal is for us to be able to do relation and overlay operations on the different coverages irrespective of how the coverages are stored. therefore we need a database management system that can provide these supports for this purpose. 
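as a concrete, hypothetical reading of these two storage requirements, the minimal sketch below creates one table per coverage type from a python client; the postgis-style column types anticipate the dbms analysis in the next section, the table and column names mirror those used later in the scenario listings (in_situ_lst with the_geom and temp_value, lst_day with rast), and the connection parameters are placeholders only.

import psycopg2

DDL = """
create table if not exists in_situ_lst (
    temp_lst_id serial primary key,
    the_geom    geometry(point, 4326),  -- vector point coverage: domain
    temp_value  double precision,       -- vector point coverage: range
    obs_time    timestamp
);
create table if not exists lst_day (
    rid      serial primary key,
    rast     raster,                    -- raster coverage: gridded domain and range
    obs_time timestamp
);
"""

conn = psycopg2.connect(host="localhost", dbname="sensordb",
                        user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    cur.execute(DDL)   # one geometry-plus-value table, one raster table
conn.close()

any spatial database offering equivalent vector and raster column types could be substituted here; the point is only that both coverage types live side by side in one schema.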
database management system (dbms) support analysis effective storage and retrieval of vector data has been well developed and implemented in most of the spatial databases such as postgresql/postgis, oracle spatial, mysql, microsoft sql server 2008, spatialite, informix, etc. on the other hand, oracle spatial and postgresql/postgis dbms are currently the only dbms that have substantial support for raster data management. meanwhile oracle spatial supports raster data storage with less support for raster data analysis in the database. however postgresql/postgis 2.0 has relatively good raster support, functions and operations that we can leverage for the feasibility of our research goal. in addition postgresql/postgis 2.0 can be configured with python gdal-bonded to leverage more functionality. postgresql/postgis 2.0 capability to carry out seamless vector and raster data integration makes it favourable in this type of our work than oracle spatial. postgresql/postgis 2.0 can handle pixel-level raster analysis unlike oracle spatial whose content search is based on minimum bounding rectangle (mbr). postgresql/postgis 2.0 uses geospatial data abstraction libraries (gdal) to handle multi-format image input and output and when working with out-db-raster, this is a powerful functionality. postgresql/postgis 2.0 supports gist spatial indexing, gist stands for "generalized search tree" and is a generic form of indexing. gist is used to speed up searches on all kinds of irregular data structures (integer arrays, spectral data, etc) which are not amenable to normal b-tree indexing [8]. in postgresql/postgis 2.0, raster coverage can be created by having a geometry column called raster and attribute columns containing the attributes to the raster (e.g. band number, timestamp and so on). the fundamental database or storage support needed on the raster coverage for efficient seamless integration and analysis with vector coverages such as tiled raster storage, georeferencing, multiband/multi-resolution support and so on are provided by postgresql/postgis 2.0 [9]. structured query language (sql) raster functions and operators for raster manipulations and analysis are substantially supported in postgresql/postgis 2.0, more functions are being developed. 3. conceptual design and modelling of a heterogeneous database schema based on those fundamental requirement analysis, we went on to develop a conceptual model of a heterogenous sensor datbase schema, integrating remote and in-situ sensor observations. the uml model in figure 3 shows the high level abstraction model of the fundamental classes (tables) that are needed in a sensor database, their attributes and important operations that can be carried out on them. it shows the relationships and the logic between the classes which geoinformatics fce ctu 8, 2012 59 maduako, i: postgis-based heterogeneous sensor database framework enable integration between the classes. the entity relationship (er) diagram in figure 4 describes the logical design for physical implementation of the entities, the fields in each class and the relationships between entities. also in this section we developed the conceptual model of how the database model can be integrated with other web services seamlessly, introducing the concept of the web query service, wqs. 3.1. 
the heterogeneous database schema entity description this section describes the functions and relationships of the entities or tables in the heterogeneous sensor database schema as modeled in the uml diagram shown in figure 3. the detailed description of their attributes and values are not necessary within the context of this paper. figure 3 is the high level conceptual model while figure 4 is the logical model of the database. the list_of _table class contains the list of all the table names in the database. operations like getlist_of_tables and updatelist_of_table can be perform on it from the user end through our proposed sql web query service wqs. the efficacy of this table is to present to the user the names and descriptions of all the tables contained in the database. a “select * from list_of_table” sql instruction from the client end would present a table describing all the tables contained in the database. this is a kind of descibetables operation by the user from the client end. coverage class holds the information about each of the coverages contained in the database such as the id, description etc. the in-situ and remote sensor observations are stored as coverages, vector and raster respectively in the database. therefore it is necessary to have a table that presents the collections and a short description of the coverages contained in the database. observed_phenomenon class is the table that contains the names, descriptions, coverage type etc. of the various geographic phenomena that are contained in the observations. this is different from the features of interest table which contains the different features or formats of these observed phenomena that are of special interest. feature_of_interest class is the table that has the records of different features of the observed geographic phenomenon in the database. sensorplatform class is the table with the record of the sensor platforms on which the sensors are mounted or housed. sensor class is the table that contains the basic attributes about the observing sensor. attributes such as the sensor platform, sensor type, sensor model etc. are contained in this table. sensorinfo table contains information related to the sensor mearsurement and method. attributes such as spatial coverage, temporal coverage, collection frequency, unit of measurement etc. can be found in this table. observation class is the table that connects the sensor, observed_phenomenon, quality, in-situobservations and remoteobservations tables. observation table does not contain the values and time stamps of each observed value, they are contained in the in-situ and remote observations tables. geoinformatics fce ctu 8, 2012 60 maduako, i: postgis-based heterogeneous sensor database framework figure 3: uml conceptual schema model of the proposed heterogeneous sensor database geoinformatics fce ctu 8, 2012 61 maduako, i: postgis-based heterogeneous sensor database framework in-situobservation class is the table that contains the compelete data of each observation that is contained in the observation table where observationtype is in-situ. it has oneto-one relationship with observations table. the relationship between this table and the remote_observation table are handled on the fly leveraging the postgis intersection operation because the two coverages are handled differently in the database. basic operations as well as complex operations such as intersection with raster, interoplation or rasterisastion can be carried out on this class. 
the attribute called the_geom contains the geometry of each observed data. remoteobservation table contains the raster data of each observation that is contained in the observation table, where observationtype is remote. it has many-to-one relationship with the observation table. its relationship with the in-situobservation are executed on the fly through the geometry columns . its attribute called rast contains the geometry or coordinate information as well as the the data values (geomval). the intersection between the in-situ and remote observations tables is made possible through the intersection of the ‘the_geom’ and the ‘rast’ which is done on the fly. also more complex operations such as calculate, vectorise, intersect with vector can be performed on this class. metadata tables houses some important header data about any raster data contained in the remoteobservations table. it has many-to-one relationship with the remoteobservation table. it can be updated, selected from, listed etc. from the user end through an sqllanguage based request. this table is created implicitly and encapsulated in the remote_observation table and is used to describe the coverages. 3.2. the er-diagram and logical design of the database model figure 4 is the entity relationship diagram and logical design of the proposed heterogenous sensor database. the diagram shows the relationship logics between the tables for an effective physical implementation in postgis, leveraging the primary and foreign keys for seamless integration between the class. the relationship and integration of the in-situobservation and remoteobservation tables are executed on the fly, leveraging their geometry columns and the coverage concept. 4. integrating the heterogeneous sensor database with the ogc web services we propose an sql-based web query service, wqs that delivers sql queries from the user end to the database in the web service . this service can be intergated and accessed from within the user’s web or desktop application. this service provides the cleint the flexibility and ability to construct queries of extensive complexity which is delivered to the database for processing. in this case, aggregations, processing and analysis of remote and in-situ observations are carried out at the database backend. the result of the query can be delivered in different formats such as ascii, gml, kml, tiff, jpeg etc. in compliance with the ogc web mapping services, the wfs, wms and wcs. the user specifies the formats of delivery on the query by using the postgis “st_as*” function. ascii or text results are delivered to the client directly from the database through the wqs. if the request result is to be delivered as a raster coverage, then the query result is a raster or a rasterised vector geoinformatics fce ctu 8, 2012 62 maduako, i: postgis-based heterogeneous sensor database framework figure 4: er-diagram and logical design of the heterogeneous database and will be delivered to the client through the web coverage service, wcs protocol. similar process goes for a vector or vectorised query result which is delivered through the web feature service wfs protocol. the request result can be delivered as a jpeg or png image format to the user through the web map service wms protocol as described in figure 6. 4.1. the concept of the web query service wqs the web query service, wqs is the proposed sql query service that serves query from the client’s web or desktop application to the heterogeneous sensor database. 
the web query service delivers sql queries from the client application through the network to the sensor database. it makes it easier to build and execute queries on a remote sensor database from any client application. geoinformatics fce ctu 8, 2012 63 maduako, i: postgis-based heterogeneous sensor database framework figure 5: conceptual model of the proposed web query service wqs in figure 5 the sql query is delivered from the frontend dispatcher of client web or desktop application to the query processing and optimization module for optimization and parsing to the backend for query execution. from a web application, the sql query request is dispatched via the http. from within a desktop application, a connection to the database would have to be established manually before queries are sent to the database for execution. proposed conceptual architecture of integrating the heterogenous sensor database and ogc web services. figure 6 describes the conceptual achitecture of our proposed integration of the heterogenous sensor database as part of the sensor observation service with the proposed web query service wqs and other web services to deliver effective results to the end user. the user on the client end, web or desktop application delivers sql queries of any complexity through the wqs to the database. the result of the query is delivered back to the user through the relevant services depending on the format the result is requested. the st_as * postgis function is used in the query to specify the format of delivery. when the user specifies for example st_as geotiff, the raster coverage query result is wrapped in xml and delivered to the client through the wcs protocol. the same process goes for query results specified in st_as jpeg, png and kml or gml which are dilivered through the wms and wfs respectively to the client. if no delivery format is specified in the query, the result is returned back to the client via the wqs by defualt in ascii format. ogc web service operations such as getcapabilities, describesensor, describeplatform, getobservation, describecoverage or getratermetadata, getcoverage, processcoverage etc. are carried out through this web query service wqs by sql queries. geoinformatics fce ctu 8, 2012 64 maduako, i: postgis-based heterogeneous sensor database framework figure 6: proposed conceptual architecture of integrating the heterogeneous database and the web services 5. prototypical implementation and scenario evaluation in this section we did a prototypical implementation of the heterogeneous database model in postgis 2.0 as shown in figure 7. we loaded the tables with in-situ and remote sensor data as described in the logical model. in-situ sensor observations stored as vector coverages and remote sensor observations as raster coverages. in the heterogeneous database we had in-situ and remote sensor land surface temperature lst coverage, sea surface temperature sst coverage, reference evapotranspiration in-situ coverage, normalised difference vegetation index ndvi coverage and so on. afterwards some few scenarios or use cases out of the numerous use cases where the proposed heterogeneous sensor database model can be leveraged to accomplish geo-scientific queries and processing involving remote and in-situ observations were carried out. the query scenarios were executed from a client desktop application (the openjump desktop gis application) after establishing a connection to the heterogeneous sensor database at the server. 
the scenarios ranged from a simple case, where a geo-scientist would want to obtain the temperature difference between in-situ and remote temperature observations, to a more complex case of estimating the daily plant evapotranspiration of a particular location.
figure 7: a screen shot excerpt of the heterogeneous sensor database model with the tables
figure 7 is a screen shot excerpt showing the physical implementation of the database model in the postgresql/postgis 2.0 database management system, with both the remote and in-situ sensor observations efficiently stored for seamless integration.
5.1. scenario 1: in-situ and satellite surface temperature analysis
this scenario calculates the temperature difference between the in-situ sensor land surface temperature observation and the remote sensor land surface temperature observation of a particular location. the sql code in listing 1 was used to obtain the required result from within the openjump desktop application.
listing 1: scenario 1 implementation sql code
select val1, (gv).val as val2, val1 - (gv).val as diffval, geom
from (
    select st_intersection(rast, the_geom) as gv,
           temp_value as val1,
           st_asbinary(the_geom) as geom
    from in_situ_lst, lst_day
    where the_geom && rast
      and st_intersects(rast, the_geom)
      and temp_lst_id = 1
) foo;
here, this query picks a particular temperature observation, ‘val1’, from the in-situ land surface temperature table in_situ_lst at the location where id = 1, compares the temperature value with the corresponding remotely observed temperature, ‘val2’, of the same location on the raster temperature coverage lst_day, and returns the difference, ‘diffval’. figure 8 is a screen shot excerpt from the openjump client desktop application showing the connection to the heterogeneous sensor database and the result of the query: the connection and the sql query are depicted on the upper right-hand side of the image, while the query result is on the lower left corner.
figure 8: screen shot of scenario 1 implementation in openjump
5.2. scenario 2: estimation of actual crop evapotranspiration et
at the database backend we have the in-situ reference evapotranspiration ret coverage from automatic weather stations and the ndvi raster coverage with its spatio-temporal attributes in the heterogeneous sensor database. therefore we can calculate the actual evapotranspiration aet from an aggregation of ret and the fraction of vegetation cover fvc, where fvc is derived from ndvi [1].
aet = fvc * ret [1]
fvc = n^2
n = (ndvip - ndvimin) / (ndvimax - ndvimin)
where
aet = actual evapotranspiration
fvc = fraction of vegetation cover
ret = reference evapotranspiration obtained from the in-situ observation
ndvip = the ndvi value at a point p
ndvimax = the maximum ndvi value within the entire area of observation
ndvimin = the minimum ndvi value within the entire area of observation
in the query below, a geo-scientist can leverage the simple formula above to obtain the aet of a particular location, having the ret of that particular point in the in-situ observation table and the ndvi coverage of the area in the heterogeneous sensor database. to implement this scenario, we could use 1 and 0 as the approximate maximum and minimum ndvi values within the area, but this would give only an approximate and not very precise estimation. to obtain the actual ndvimax and ndvimin of the coverage area, we used the sql queries below in listings 2 and 3, whose results can then be factorized into the comprehensive aet query statement in listing 4 to obtain the precise aet.
listing 2: sql query to obtain the ndvimax
select (stats).max
from (
    select st_summarystats(rast) as stats
    from ndvi
    order by stats desc
    limit 1
) as foo;
listing 3: sql query to obtain the ndvimin
select (stats).min
from (
    select st_summarystats(rast) as stats
    from ndvi
    order by stats asc
    limit 1
) as foo;
in our example case, we calculated the aet of a point in the ret table in_situ_ret where id = 1 by implementing the sql statement in listing 4 below. from the queries in listings 2 and 3 we obtained ndvimax and ndvimin as 0.86 and 0 respectively, factorized them in as shown in listing 4, and got the results from within the openjump client desktop application shown in figure 9.
listing 4: scenario 2 implementation sql code
select ret, ndvip,
       pow((ndvip - 0) / (0.86 - 0), 2) as fvc,
       pow((ndvip - 0) / (0.86 - 0), 2) * ret as aet,
       the_geom
from (
    select st_value(r.rast, i.the_geom) as ndvip,
           i.value as ret,
           st_asbinary(i.the_geom) as the_geom
    from in_situ_ret i, ndvi r
    where ret_id = 1
      and st_value(r.rast, i.the_geom) is not null
) foo;
figure 9: screen shot excerpt of the scenario 2 implementation result in the openjump client
lots of other scenarios were also tested, for example the calculation of a weighted mean surface temperature value from a vector buffer: one could select a particular observation among the in-situ temperature observations, create a buffer of a given radius around that observation, then overlay this buffer geometry on the raster temperature coverage and obtain a weighted mean surface temperature value within the buffered region from the raster coverage. another tested scenario was to describe raster coverage metadata, as done by the ogc sensor web "describecoverage" operation, to obtain the metadata of a particular coverage; in this case, we could leverage the postgis raster metadata description functions to provide the client side with the description of a raster coverage through an sql query.
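as a quick cross-check of listing 4, the same computation can be reproduced outside the database; in the small python sketch below the ndvi_max and ndvi_min values (0.86 and 0) are those obtained from listings 2 and 3, while the ret and ndvi_p inputs are illustrative numbers only.

def actual_evapotranspiration(ret, ndvi_p, ndvi_max=0.86, ndvi_min=0.0):
    """aet = fvc * ret, with fvc = n**2 and n the rescaled ndvi value."""
    n = (ndvi_p - ndvi_min) / (ndvi_max - ndvi_min)
    fvc = n ** 2
    return fvc * ret

ret = 4.2      # mm/day, illustrative in-situ reference evapotranspiration
ndvi_p = 0.60  # illustrative ndvi value at the point of interest
print(round(actual_evapotranspiration(ret, ndvi_p), 2))  # prints 2.04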
in general the results of the sample queries shown above for the mentioned scenarios are alphanumeric or csv formatted. they are returned to the client directly from the database. other results formats are also possible as described in section 4.1 above, depending on how the client wants the results delivered. 6. evaluation and conclusion in our final evaluation of the methods discussed we focus on three major topics, query flexibility, reduction of communication load and work and massive data retrieval load. 6.1. query flexibility the various geo-processing scenarios we implemented in the prototypical implementation exercise from within the openjump client side desktop application show that, this approach of delivering sql based queries from the client end direct to the database backend makes it more flexible for the user on the client end to deliver geo-processing queries of extensive complexity involving in-situ and remote sensor observations. language based query request such as the (sql) has been considered advantageous especially by the database community because is very flexible, declarative, optimizable and more safe in evaluation [10]. this extensive support for different kinds of geo-processing and analysis involving in-situ and remote sensor observations through native sql queries makes this approach advantageous to the current approach of having different geo-processing modules on the web service for specific purposes. in that case users are restricted only to the specific geo-processing capabilities the service offers. geoinformatics fce ctu 8, 2012 69 maduako, i: postgis-based heterogeneous sensor database framework 6.2. reduced communication load the prototypical implementation our proposed heterogeneous sensor database model and the model of how it can be integrated seamlessly with the ogc geo-web services show that these disparate sensor observations can be integrated and managed in a single spatial database leveraging postgis 2.0 functionalities. hence communication load to different databases are invariably reduced. the communication time lag incurred in the downloading of raster images from a flat file database via the ftp, obtaining in-situ observations from an in-situ sensor observation service and integrating the two on the service middleware level is invariably reduced greatly, adopting this heterogeneous database approach. 6.3. work and massive data retrieval load also taking a look at the contents and the processes in dynamic systems such as in [11], [2], [12], [13] and in the ogc web processing services, they provide clients access and results based on pre-programmed calculations and/or computation models that operate on the spatial data. to enable geospatial processing and operations of diverse kinds, from simple subtraction and addition of sensor observations (e.g. the difference between satellite observed temperature and in-situ observation of a location) to complicated ones such as climate change models, requires the development of a large variety of models on the service middleware. this is massive in work load and huge amount of programming on the service. also the data required for these services are usually retrieved dynamically from different databases and services which most times entails massive data retrieval especially from the satellite data (raster) storage. 
contrarily, by the means of a heterogeneous sensor database model such as developed and implemented in this research leveraging the functionalities of postgis 2.0 database extension, geo-processing and analytics involving remote and in-situ sensor data are carried out at the database backend by native sql request statements. therefore the variety of geo-processing work load on the service middleware is reduced. the service middleware in our case is majorly for service delivery from the client to database and vice versa. massive data retrieval before processing is completely avoided. also massive programming involved in the development of different kinds of geo-processing models on the web service is reduced. 7. further work the practical usefulness of this proposed approach will be very more appreciated and leveraged when we are done with the full implementation of the model, integrating the proposed sensor heterogeneous database and other geo-web services in the sos. this is our next milestone, to fully integrate this heterogeneous sensor database framework and the wqs with other geo-web services for query result delivery to the clients in different formats as described in figure 6. references [1] groundwater and vegetation effects on actual evapotranspiration along the riparian zone and of a wetland in the republican river basin. gregory cutrell, m. evren soylu. nebraska-lincoln : s.n., 2009. geoinformatics fce ctu 8, 2012 70 maduako, i: postgis-based heterogeneous sensor database framework [2] development of a dynamic web mapping service for vegetation productivity using earth observation and in situ sensors in a sensor web based approach. kooistra, l., et al. 4, 2009, sensors , vol. 9, pp. 2371-2388. [3] hamre, torill. integrating remote sensing, in situ and model data in a marine information system (mis). marine information system (mis), in proc. neste generasjons gis. 1993, pp. 181-192. [4] geographic information -schema for coverage geometry and functions. iso. 2009, tc 211 geographic information/geomatics. [5] postgiswiki. postgis userswiki. [online] [cited: september 10, 2011.] http:// trac.osgeo.org/postgis/wiki/userswikicoveragesandpostgis. [6] management of multidimensional discrete data. baumann, peter. issue 4, october 1994, the vldb journal, vol. volume 3, pp. 401-44. [7] store, manipulate and analyze raster data within the postgresql/postgis spatial database. racine, pierre. denver : http://2011.foss4g.org, 2011. foss4g. [8] postgis.refractions.net. postgis 1.5.3 manual. postgis.refractions.net web site. [online] [cited: august 20, 2011.] http://www.postgis.org/docs/ [9] pierre, racine. wktrastertutorial01. postgis. [online] june 2010. [cited: 11 12, 2011.] http://trac.osgeo.org/postgis/wiki/wktrastertutorial01 [10] designing a geo-scientific request language a database approach. baumann, peter. s.l. : springer-verlag berlin, heidelberg, 2009. ssdbm 2009 proceedings of the 21st international conference on scientific and statistical database management. isbn: 9783-642-02278-4. [11] rueda, carlos and gertz, michael. real-time integration of geospatial raster and point data streams. statistical and scientific database management. 2008, pp. 605--611. [12] warmer in-situ and remote data integration. alastairallen, et al. southampton (uk) : s.n., 30th march 2009. national oceanography center. [13] an integrated earth sensing sensorweb for improved crop and rangeland yield predictions. teillet, p m, et al. 2007, canadian journal of remote sensing, vol. 33, pp. 88-98. 
geoinformatics fce ctu 8, 2012 71 http://trac.osgeo.org/postgis/wiki/userswikicoveragesandpostgis. http://trac.osgeo.org/postgis/wiki/userswikicoveragesandpostgis. http://2011.foss4g.org http://www.postgis.org/docs/ http://trac.osgeo.org/postgis/wiki/wktrastertutorial01 geoinformatics fce ctu 8, 2012 72 new gnss tomography of the atmosphere method – proposal and testing michal kačmařík1, lukáš rapant2 1institute of geoinformatics faculty of mining and geology všb–technical university of ostrava 17. listopadu 15, 708 00, ostrava-poruba, czech republic michal.kacmarik@vsb.cz 2department of applied mathematics faculty of electrical engineering and computer science všb–technical university of ostrava 17. listopadu 15, 708 00, ostrava-poruba, czech republic lukas.rapant@vsb.cz abstract paper is focused on gnss meteorology which is generally used for the determination of water vapour distribution in the atmosphere from gnss measurements. water vapour in the atmosphere is an important parameter which influences the state and development of the weather. at first, the paper presents basics of the gnss meteorology and tomography of the atmosphere and subsequently introduces a new gnss tomography method which doesn't require an extensive network of gnss receivers, but uses only a few receivers situated in a line. after a theoretical concept describing this method and used mathematical background, the results from a real experiment are shown and discussed. unfortunately the results indicate that presented method is not able to provide credible outputs. possibly the main problem lies in an insufficient number of available signals from current global navigation satellite systems (gps and glonass) where the improvement could be expected after the start of galileo and compass. potential ways how to improve the results without increasing the number of satellites are outlined in the last section. keywords: gnss tomography, gnss meteorology, slant wet delay 1. introduction signal from the gnss satellite is during its path through the atmosphere influenced by troposphere and ionosphere. the influence of the ionosphere is possible to eliminate from the gnss measurements using a suitable method of dual-frequency data processing. the troposphere cannot be simply eliminated and, due to its dependence on the variable amount of water vapour in the atmosphere, it cannot be modelled with sufficient accuracy based on external data. however, it is possible to determine the troposphere and thus also the water vapour from gnss measurements. currently used methods for atmospheric water vapour determination (e.g. meteorological radiosondes, remote sensing satellites, water vapour radiometer) are connected with some limitations and disadvantages and these could be completed by processing gnss data. on geoinformatics fce ctu 9, 2012 63 kačmařík, m., rapant, l.: new gnss tomography of the atmosphere method the other side, even this method has its own problems which cause a limitation when used in numerical weather prediction (nwp) models. the main disadvantage is that it can produce only an absolute value of atmospheric water vapour in a zenith direction above the gnss receiver in any time (zenith total delay, ztd). thus, the vertical distribution of water vapour above the receiver, which is the information very important for data assimilation in nwp models, is not known. 
for this reason the method of gnss tomography, which allows a three-dimensional study of atmospheric water vapour, has been developed since 2001 (flores et al., 2001, noguchi et al., 2001). at present there exist a few approaches to solving this task, but all of them require a dense network of gnss reference stations to achieve a reasonable spatial and temporal resolution. in the past those networks often had to be built artificially for the needs of tomographic research projects; nowadays, however, the possibility of using national reference station networks for creating tomographic systems over the area of a whole country is becoming an interesting topic. both local and national tomographic systems require access to the data from a large number of gnss reference stations and the necessary technical background for their processing. because all contemporary tomographic projects are tied to the necessity of a dense network of gnss receivers, the authors attempted to propose a new, easier method. its core concept is to use only one cross-section through the atmosphere instead of solving a whole three-dimensional network of voxels. thanks to that, the necessity of a large number of receivers in a network is reduced to a few receivers situated in a single line in the terrain. the output is then limited to a 2d tomographic grid instead of a three-dimensional water vapour reconstruction.
2. gnss meteorology
it has been shown many times that the gps system is useful for tropospheric parameter estimation – an approach called gps meteorology (bevis et al., 1992, duan et al., 1996). in the past only the signals from the gps system were used for the determination of tropospheric parameters, but nowadays a combination of signals from more global navigation satellite systems (gps + glonass) can be used for such processing. this approach is analogously called gnss meteorology. the standard value of the gnss signal delay due to the atmosphere in the zenith direction is 2.3 m (or 8 ns) at zero sea level. the troposphere influences gnss signals in two different ways. firstly, the signal bends in response to gradients of the index of atmospheric refraction, travelling a curved path instead of a straight line. secondly, the signal travels through an environment with some density more slowly than it would travel in a vacuum. the total delay is the sum of those two effects. for the purposes of gnss meteorology, however, ztd can be divided in a different way – into the part caused by the hydrostatic components of the atmosphere (zenith hydrostatic delay, zhd) and the part caused by the non-hydrostatic components of the atmosphere (zenith wet delay, zwd). while zhd depends mainly on the atmospheric pressure, zwd depends mainly on the water vapour (zhengdong, 2004). during the gnss data processing both components are modelled separately and the relation is expressed in (1). from a general point of view the hydrostatic characteristics of the atmosphere are rather stable in time and relatively easy to model; on the other side, the atmospheric water vapour amount is temporally highly unstable and thus much more difficult to model.
ztd = zhd + zwd (1)
on the basis of zenith total delays (ztd) and meteorological parameters observed in situ at the gnss measurement sites it is possible to precisely determine the atmospheric water vapour (precipitable water vapour, pwv). a significant advantage of gps/gnss meteorology is its possible high spatial and temporal resolution: by processing data from twenty receivers and deriving pwv values at an hourly interval, we get 480 values per day for the studied territory, whereas the meteorological radiosondes currently used for the whole czech republic provide only 6 pwv measurements a day, at two distinct places only. stable and reliable results even under very severe atmospheric conditions, including extreme storms, gales or torrential rains, are another advantage of gnss meteorology, as confirmed by liou and huang (2000), song and grejner-brzezinska (2009) and kačmařík (2010). the first attempts to assimilate ztd or pwv values from gps measurements into nwp models were already made in the last years of the 20th century. many studies have shown that this step has a positive impact on the short-term prediction of rain localization and intensity (guerova, 2003, koizumi and sako, 2003, nakamura et al., 2004, peng and zou, 2004, vedel and huang, 2004, shoji et al., 2009). in europe, ztd products from gnss data processing are assimilated into nwp models in great britain (bennitt, 2008) and in france (moll et al., 2008).
3. tomography of the atmosphere
for the modern generation of nwp models with a horizontal resolution of a few kilometres, the existing network of sites launching meteorological radiosondes is not able to provide corresponding information about the vertical distribution of atmospheric water vapour. classical gnss meteorology cannot provide a vertical profile, but gnss tomography of the atmosphere can, as has already been proven by many projects (e.g. gradinarsky and jarlemark, 2004, champollion et al., 2004, troller, 2004, bender et al., 2011, notarpietro et al., 2011). the tomography method is generally known and is successfully used in many areas of human activity such as medicine, rock and material studies, archaeology etc. the principle is a spatial discretization of the space above the network of gnss receivers into a three-dimensional network of cells called voxels. information about the water vapour is reconstructed for each voxel from all accessible slant wet tropospheric delays (swd) observed between gnss receivers and satellites. usually, it is assumed that within one voxel the water vapour is constant. the slant total delay of the gnss signal caused by the atmosphere can be written as (bevis et al., 1992):
std = 10^-6 ∫s n ds (2)
where std is the slant total delay, n the atmospheric refractivity and s the length of the signal path. in gnss tomography slant total delays are provided by independent observations to all visible gnss satellites, and the refraction parameters are thus reconstructed from a large number of such delays. this defines an inverse problem, which could result in more than one solution. the primary problem is the limited number of signals, given by the number of ground receivers and by the time-variable number of satellites. using a discretization of the space above the ground network of gnss receivers into voxels, the relation (2) can be written as
ax = m (3)
where m is the vector of i observations, i.e. slant path delays (swd), x is the vector of j refractivities (n) in j voxels, and a is the kernel matrix, the mapping of the state x onto the observations m; the matrix elements aij are the subsections of the i-th slant path in the j-th grid cell (bender et al., 2011). for the tomography purposes the path of the signal is assumed to be a straight line and the influence of signal bending is therefore neglected. thanks to this assumption the whole inversion problem becomes linear and leads to a system of linear equations.
3.1. input data
as mentioned above, the slant tropospheric delays at various azimuthal and elevation angles are used for the purpose of gnss tomography, and not just the ztds which are estimated in the classical gnss processing model. more concretely, the swds caused by the non-hydrostatic characteristics of the atmosphere are used instead of std. the relationship between zwd and swd is given as follows:
swd(ε, θ) = m(ε) × (zwd + cot ε × (gn cos θ + ge sin θ)) + r (4)
where ε is the satellite elevation angle, θ the satellite azimuthal angle, m the mapping function, gn and ge the horizontal gradients for the north (n) and east (e) directions, and r the post-fit residual. the mapping function defines the relationship between the tropospheric delay in the zenith direction and those observed at all elevation angles; the basis of any mapping function is 1/sin ε, which corresponds to the simplest case, and currently the most used mapping function is the one defined by niell (1996). horizontal gradients are considered to bring into the model information about an azimuthal anisotropy in the atmosphere, and in the most accurate applications they are usually estimated too. the post-fit residual represents the difference between a measured observation and the observation adjusted during the data processing. it is expected to contain the anisotropic part of the wet delay, but also various other unmodelled errors which influence the gnss measurements: multipath, errors of antenna phase centre variations or signal noise. some tomography projects use the tropospheric horizontal gradients and/or post-fit residuals as additional information on water vapour when reconstructing the anisotropic part of the atmosphere (flores et al., 2001, noguchi et al., 2001, gradinarsky and jarlemark, 2004, bender et al., 2011), but other authors (gradinarsky and jarlemark, 2004, nilsson et al., 2005) are sceptical of such an approach. kačmařík et al. (2012) used various approaches for deriving swd values from the gnss reference station gope and confronted them with in situ water vapour radiometer measurements. the comparison showed that using the tropospheric horizontal gradients or post-fit residuals (with or without correcting for multipath) did not give improved estimated swds. adding the post-fit residuals to the gps swd had only a very small positive impact on bias, while it negatively influenced the standard deviation. the situation slightly improved after using stacking maps for removing the multipath effect from the residuals. the results show that the post-fit residuals, even after multipath elimination, do not represent only the anisotropic part of the water vapour distribution but also further insufficiencies in the model. generally, the process of slant wet delay determination can be written as a sequence of these steps (skone et al. 2003, miidla et al.
2008): • ztd computation with optional determination of horizontal gradients or post-fit residuals • zhd computation using an appropriate hydrostatic model and values of meteorological quantities measured at the gnss receiver’s location, derivation of zwd from ztd and zhd • computation of swd for particular observations using appropriate mapping function and optionally horizontal gradients or post-fit residuals with or without multipath correction 3.2. general quality of tomographic reconstructions quality evaluation of gnss tomographic results is possible to perform on the basis of comparison with vertical profiles from meteorological radiosondes or nwp models. regardless of used ground network of gnss receivers or applied tomographic solution so far, results are confronted with two common trends: • during standard atmospheric conditions, when the water vapour descends approximately exponentially with the elevation, the gnss tomography achieves very good results comparable with radiosondes measurements, • in cases of atmospheric inversion or with distinctively different vertical development of water vapour from the standard atmospheric conditions, the tomography can represent this trend only with strong difficulties. this situation is partially given by the necessity of adding constraints in the tomographic system defining mutual relations between neighbouring voxels. if the values between voxels are more smoothed, it is impossible to catch such an inverse trend. on the other way, if the geoinformatics fce ctu 9, 2012 67 kačmařík, m., rapant, l.: new gnss tomography of the atmosphere method weights in the tomographic system are set more freely, the system becomes less stable and is more prone to provide unreliable results. 4. theoretical concept of new tomographic method as already mentioned in the introduction, the main idea of new tomography method is to use only a limited number of gnss receivers situated in a line and reconstruct vertical 2d water vapour distribution. the most signals are crossing throw the 2d voxels situated above the centre receiver and therefore the quality of tomographic reconstruction in these voxels should be the best. therefore the primary output from the method would be a vertical profile for a receiver situated in the middle of the line. vertical profile is an output produced also by meteorological radiosondes, which is perfectly suitable for nwp assimilation requirements. the orientation of used line of receivers in a terrain is very important for the new tomographic method. it directly influences the number of visible satellites in a thin atmospheric strip demarcated parallely with the line of receivers. it is possible to use standard applications for planning gnss measurements to propose a suitable orientation of the line and demonstrate an affordable state. fig. 1 shows a gps satellites constellation for an interval 14.30 – 16.00 utc for selected day at the reference station vsbo. this graphical output was acquired from topcon occupation planning tool. the orientation of notional line was chosen in the east-west direction and simulation of obstructions was provided. thus, only parts of the sky with azimuthal angles between 88◦ – 92◦ and 268◦ 272◦ remained visible. six various gps satellites were accessible in the narrow strip and selected interval. similar, but mostly worse situations held for different parts of a day or different line orientations. 
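the visibility test behind fig. 1 — whether a satellite's azimuth falls into a narrow strip around the chosen line direction — can be sketched in a few lines of python. the enu formulation, the function names and the 2° half-width (matching the 88°–92° and 268°–272° windows mentioned above) are assumptions of this illustration only, not the planning tool actually used:

    import numpy as np

    def sat_azimuth_elevation(sat_ecef, rec_ecef, lat, lon):
        """azimuth and elevation [rad] of a satellite seen from a receiver;
        both positions are geocentric (itrf/ecef) vectors in the same units,
        lat/lon are the receiver's geodetic coordinates in radians."""
        d = np.asarray(sat_ecef, dtype=float) - np.asarray(rec_ecef, dtype=float)
        sl, cl = np.sin(lon), np.cos(lon)
        sp, cp = np.sin(lat), np.cos(lat)
        e = -sl * d[0] + cl * d[1]
        n = -sp * cl * d[0] - sp * sl * d[1] + cp * d[2]
        u =  cp * cl * d[0] + cp * sl * d[1] + sp * d[2]
        az = np.arctan2(e, n) % (2.0 * np.pi)
        el = np.arctan2(u, np.hypot(e, n))
        return az, el

    def in_strip(az, strip_az=np.radians(90.0), half_width=np.radians(2.0)):
        """true if the azimuth lies inside the narrow strip around the chosen
        line direction (here east-west), on either side of the receiver line."""
        delta = (az - strip_az) % np.pi
        return min(delta, np.pi - delta) <= half_width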
therefore proposed method would be generally able to produce results only during a few intervals in a day depending on the actual satellite constellation. also radiosondes measurements are limited with such a phenomenon while being launched each six or twelve hours. besides the maximum number of visible satellites, very important is their elevation. the lowest atmospheric layer above the ground contains the most of water vapour and therefore the signals at low elevations are of the highest interest for gnss tomography. while choosing an ideal line orientation and time interval for the reconstruction, it is important to try to achieve a state with the highest number of visible satellites distributed on various elevations including the lowest ones. 5. terrain measurements, data selection on august 2nd 2012 an 8-hour long testing measurement was done at six gnss receivers situated almost in a line with a northwest-southeast direction. five of those receivers were gnss reference stations (tvid, bisk, tkrn, vsbo, cfrm) while the sixth receiver (topcon hiper gd) was stabilized for the measurement’s time only. all receivers worked in a gps and glonass mode. glonass measurements were included to raise a number of satellites and increase an accessible reconstruction quality. total length of the line was 113 km and distances between two adjacent stations ranged from 21 to 25 km. the line orientation wasn´t ideal but came out from the accessibility of gnss measurements from selected reference stations and overall feasibility to realize such geoinformatics fce ctu 9, 2012 68 kačmařík, m., rapant, l.: new gnss tomography of the atmosphere method figure 1: gps satellites constellation, gnss reference station vsbo, time interval april 11th 2012 14.30 16.00 utc, dark grey colour represents notional obstructions an experiment. generally, an ideal orientation shouldn´t deviate a lot from the east-west direction. for processing gnss measurements bernese gps software (dach et al., 2007) was used. accurate coordinates of temporarily stabilized receiver were determined using a precise point positioning technique (zumberge, 1997). consequently a double differencing technique was used for ztd processing using settings summarized in a table 1 and a europe-wide network containing approximately 30 reference stations. for zhd determination a saastamoinen model and measured values of meteorological parameters were used. slant wet delays were computed from zwd estimated in 30-second intervals from gps and glonass observations. horizontal gradients or post-fit residuals weren´t used for swd computation. values of atmospheric air pressure and temperature were measured only at the station vsbo and in a place of the temporarily stabilized receiver. for locations of rest reference stations the values of atmospheric air pressure were re-counted using the babinet's formula given in (5) and values of pressure and temperature measured at the location of the nearest receiver or at a close synoptic meteorological station operated by the czech hydrometeorological institute. 
\[ z = 16\,000 \cdot \left(1 + \frac{t}{273}\right) \cdot \frac{p_1 - p_2}{p_1 + p_2} \tag{5} \]

where
z   difference between the elevations of points 1 and 2 [m]
t   atmospheric air temperature measured at point 1 or 2 [°c]
p1  atmospheric air pressure at point 1 [hpa]
p2  atmospheric air pressure at point 2 [hpa]

after the swd computation (without adding horizontal gradients or post-fit residuals), the observations falling into a narrow vertical strip above the line of receivers were selected. the selection was done geometrically. a plane was constructed through the centre of the earth and the two end points of the line; its equation was computed from the coordinates of these three points in the geocentric itrf coordinate system. for every satellite it was then tested whether its distance from this plane at a given time is less than 1160 km for a gps satellite and less than 1113 km for a glonass satellite. these thresholds were obtained by delimiting an angle of 2.5° at a distance of 26 560 km, which represents the distance between the centre of the earth and a gps satellite on its orbit; for glonass satellites the distance of 25 510 km was used. the list of satellites satisfying the condition, and therefore located in the defined strip, was saved into a text file. these satellites were subsequently confronted with the real data from the rinex observation files in order to exclude satellites which satisfied the condition but were located on the opposite side of the earth during the testing campaign. positions of all satellites in the itrf coordinate system at a 15-minute interval were taken from the files with precise ephemerides used previously for the ztd processing. finally, the interval between 18.45 and 19.45 utc was chosen for testing the tomographic reconstruction. during this period one gps satellite and five different glonass satellites were observed at elevation angles between 7° and 168°. it would generally be better to select a constellation with more gps satellites, because there are still some difficulties with processing glonass measurements (mainly it is not possible to fix the ambiguities of glonass measurements, and the quality of some input products such as satellite ephemerides and clocks is still lower for glonass than for gps). swd values derived from gps observations should therefore be of higher quality. unfortunately, according to the measured data it was not possible to select any one-hour interval in which more than one gps satellite had been observed.

ephemerides, satellite clocks       code rapid
sampling rate                       30 s
elevation cut-off angle             3°
mapping function                    niell
phase centre correction             igs model applied
ocean loading                       applied
observables                         double differences
ztd, gradient estimation interval   30 min
pole information                    code rapid
differential code bias information  code 30-day solution

table 1: basic characteristics of ztd processing in bernese gps sw

6. mathematical background of the reconstruction

tomographic reconstruction is primarily a mathematical task. in our case certain numerical tools were used to compute the 2d tomographic grid. besides the slant wet delays, the zwd values for the particular receivers were used as input.
first of all, the tomographic system was discretized by dividing the atmosphere above the stations into a rectangular network in which vertical lines are placed between the stations and horizontal lines are placed at the desired height levels. the coordinates of the receivers were used for this purpose, together with a vertical restriction at the height of 10 km, above which the amount of water vapour was considered to be zero. signals that left the tomographic system below this height of 10 km were excluded from the reconstruction. generally, such signals can be included only if it is possible to determine the impact of water vapour on the part of the signal path beyond the tomographic system, which requires an external source of information about the vertical water vapour distribution. because of the relatively large distance between two adjacent stations in the presented solution, only a few signals had to be excluded due to this problem; all of them were measured at low elevation angles at the receivers situated near the line ends.

from the discretization and the slant swd measurements, a set of linear equations defined by equation (3) can be created. this gives the matrix

\[ A = \begin{pmatrix} a_{11} & \cdots & a_{1,\,l\cdot s} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{k,\,l\cdot s} \end{pmatrix}, \tag{6} \]

where k is the number of slant swd measurements, l is the number of horizontal layers and s is the number of measuring stations. the value of each element of the matrix is determined by the length of the trajectory of the slant signal passing through the corresponding rectangle. the rectangles are numbered from the bottom and from the left (the element a_{k1} corresponds to the bottom leftmost rectangle, a_{k,(s+1)} to the leftmost rectangle in the second row, and so on). the right-hand side of the system is created from the swd of the corresponding measurements,

\[ m = \begin{pmatrix} swd_1 \\ \vdots \\ swd_k \end{pmatrix}. \tag{7} \]

this gives the system of equations (3), from which the amount of water vapour in each rectangle can be computed. a graphical illustration of the tomographic discretization and the input data is given in fig. 2.

however, in the computations there are more slant measurements than rectangles in the discretization. this creates an over-determined system of equations, which has no exact solution. the usual method for solving over-determined systems of linear equations is the method of least squares (björck, 1996). the method of least squares is applied to the problem

\[ A x \approx b, \qquad A \in \mathbb{R}^{m \times n},\; x \in \mathbb{R}^{n},\; b \in \mathbb{R}^{m}, \tag{8} \]

where b does not lie in the column space of A, i.e. there is no vector x that solves the equation exactly. the method of least squares therefore looks for an alternative solution x_{ls} which minimizes

\[ \min_{x \in \mathbb{R}^{n}} \; \lVert A x - b \rVert_{2}. \tag{9} \]

this can be rewritten equivalently with a vector e, which modifies the right-hand side, as

\[ \min_{e \in \mathbb{R}^{m}} \; \lVert e \rVert_{2}, \tag{10} \]

where (b + e) lies in the column space of A. this transforms the system of linear equations into the task of finding the elements of the vector e that are minimal in the euclidean norm and satisfy the previous condition. it can be done by various means such as the qr or svd decompositions, or by iterative methods; in our solution the lsqr iterative method was used. after applying the method of least squares we obtain an approximate solution of the linear equation system (3).

figure 2: basic graphical illustration of the tomographic system

7. results

using the computed swd values and the specified system, the tomographic reconstruction was carried out. however, its results, stated in table 2, cannot represent a real water vapour distribution.
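the numerical core of the reconstruction — the least-squares solution of (3) with the lsqr solver described in section 6 — can be sketched in a few lines; the use of scipy's routines, the sparse storage and all variable names are assumptions of this illustration, not the authors' actual implementation:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import lsqr

    def reconstruct(kernel, swd):
        """solve the over-determined tomographic system A x ~= m in the
        least-squares sense with the lsqr iterative solver.
        kernel : (k, l*s) array of ray-path lengths per rectangle (matrix A)
        swd    : k slant wet delays (right-hand side m)
        returns the reconstructed value per rectangle (vector x)."""
        solution = lsqr(csr_matrix(kernel), np.asarray(swd, dtype=float))
        return solution[0]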
the table shows the zwd values reconstructed for the particular cells. their horizontal centres were situated at the locations of the gnss receivers, and the tomographic system used was divided into only three vertical layers (the thickness of the two lower layers was 2.5 km, and 5 km for the highest layer). tomographic projects generally use about ten vertical layers to represent the vertical distribution of water vapour in the troposphere. a classification into only three vertical layers would be insufficient for real meteorological applications and was chosen just to test the potential of the new tomographic method; if the results were promising, a grid with a much higher vertical resolution would be tested subsequently. because under standard atmospheric conditions approximately 50 % of water vapour is situated in the surface layer of the atmosphere below 1.5 km above the ground and only 5 % of its amount is found above 5 km (seidel, 2002), a reconstructed zwd with the highest values in the top layer cannot represent the real state of the atmosphere prevailing during the test measurements.

                point 1   point 2   point 3   point 4   point 5   point 6
top layer        10.5      10.5      11        11.5      11        9.5
middle layer      3.25      3.25      1.25      0.75      2.25      5.5
bottom layer      3.75     -0.75      5         5.25      2.75      0

table 2: tomographic reconstruction results, zwd values in [cm] for particular cells; the bottom layer is in the lowest row of the table.

moreover, the convergence conditions for a successful run of the least squares method were not fulfilled, which indicates an insufficient amount of input swd data. the whole situation suggests that the two currently available global navigation satellite systems, gps and glonass, are not able to provide a sufficient number of signals for a meaningful use of the proposed tomographic method.

8. future insights

a potential improvement of the described problem with the low number of gnss signals can be expected after the start of the galileo and compass systems, when the number of satellites in space will practically double. besides the increase in the absolute number of gnss signals, it should theoretically also bring more time intervals during the day when a higher number of satellites will be situated in the narrow strip above the line of receivers. more detailed work on optimizing and broadening the mathematical apparatus used for the first testing purposes would also help. an iterative component trying to find the best reconstruction result based on the input data and a given initial state of the atmosphere would possibly help as well and should be implemented in the near future; the initial state will be derived from standard atmospheric conditions, in which the water vapour decreases exponentially with increasing height. some tomographic projects use an external source of data about the water vapour (e.g. numerical weather prediction models or radiosonde profiles) to support the gnss tomographic reconstruction, but we do not consider doing that in the future: it would bring new problems with the necessity of assimilating such data in the solution and would also make the suggested method dependent on external sources.

9. conclusion

gnss tomography of the atmosphere is a modern method allowing spatial and temporal study of the water vapour distribution in the atmosphere. the projects presented so far were focused on three-dimensional reconstructions above large networks of gnss receivers.
a new tomographic method based on measurements from a line of a few receivers and reconstructing a 2d vertical grid above the line was proposed and described in the paper. unfortunately the results based on described terrain measurements and mathematic solution are not too much promising. situation indicates that gps and glonass systems can´t provide sufficient number of observations for this method. authors expect a possible improvement after an optimization of mathematic apparatus, considering further constraints or applying a priori data and using galileo and compass systems in future. if it is proven that new method can provide credible information about vertical distribution of water vapour it could became an effective source of data for nwp models and possibly reduce the necessity of meteorological radiosondes. acknowledgment this work was supported by the european regional development fund in the it4innovations centre of excellence project (cz.1.05/1.1.00/02.0070). data acquisition of the gnss continuous operating reference station vsbo is supported by the project of large research infrastructure czechgeo/epos, grant no.lm2010008. references [1] bender, m., dick, g., ge, m., deng, z., wickert, j., kahle, h.-g., raabe, a., tetzlaff, g. development of a gnss water vapour tomography system using algebraic reconstruction techniques, advances in space research, vol. 47, issue 10, pp. 1704-1720, 2011 [2] bennitt, g. use of ground based gnss data in nwp at uk met office, e-gvap workshop, copenhagen, denmark, 6th november 2008 [3] bevis, m., businger, s., herring, t.a., rocken, c., anthes, r.a., ware, r.h. gps meteorology – remote-sensing of atmospheric water-vapor using the global positioning system, journal of geohysical research atmospheres, vol. 97, issue d14, pp. 1578715801, 1992 [4] björck, å. numerical methods for least squares problems. society for industrial and applied mathematics, usa, isbn 0-89871-360-9, 1996 [5] champollion, c., masson, f., bouin, m.-n., walpersdorf, a., doerflinger, e., bock, o., van baelen, j. gps water vapour tomography: preliminary results from the escompte field experiment, atmospheric research, vol. 74, p. 253-274, 2004 [6] dach, r., hugentobler, u., fridez, p., meindl, m. gps bernese software, version 5.0, astronomical institute, university of berne, berne, 2007 [7] duan, j., bevis, m., fang, p., bock, y., chiswell, s., businger, s., rocken, c., solheim, f., van hove, t., ware, r., mcclusky, s., herring, t., king, r. gps meteorology: direct geoinformatics fce ctu 9, 2012 74 kačmařík, m., rapant, l.: new gnss tomography of the atmosphere method estimation of the absolute value of precipitable water. journal appl. m., 24(24), 830–838, 1996 [8] flores, a., rius, a., vilá-guearou, j., escudero, a. spatio-temporal tomography of the lower troposphere using gps signals, phys. chem. earth (a), vol. 26, no. 6-8, pp. 405-411, 2001 [9] gradinarsky, l. p., jarlemark, p. ground-based gps tomography of water vapor: analysis of simulated and real data, journal of the meteorological society of japan, vol. 82, no. 1b, pp. 551-560, 2004 [10] guerova, g. application of gps derived water vapour for numerical weather prediction in switzerland, phd thesis, university of bern, 2003 [11] kačmařík, m., douša, j., zapletal, j. comparison of gps slant wet delays acquired by different techniques, acta geodynamica et geomaterialia, v. 9, no. 4(168), 2012, in print [12] kačmařík, m. 
monitoring of precipitable water vapour by gps under extreme weather conditions, advances in geoinformation technologies 2010, všb – technical university of ostrava, pp. 151-161. isbn 978-80-248-2357-7, 2010 [13] koizumi, k., sato, y. impact of gps and tmi precipitable water data on mesoscale numerical weather prediction model forecasts, journal of the meteorological society of japan, vol. 82, no. 1b, pp. 453-457, 2004 [14] liou y.-a., huang ch.-y. gps observations of pw during the passage of a typhoon, earth, planets and space, vol. 52, pp. 709-712, 2000 [15] moll, p., poli, p., ducrocq, v. use of ground based gnss data in nwp at météo-france, e-gvap workshop, copenhagen, denmark, 6th november 2008 [16] miidla, p., rannat, k., uba, p. simulated studies of water vapour tomography, wseas transactions on environment and development, issue 3, vol. 4, march 2008 [17] nakamura, h., koizumi, k., mannoji, n. data assimilation of gps precipitable water vapor into the jma mesoscale numerical weather prediction model and its impact on rainfall forecasts, journal of the meteorological society of japan, vol. 82, no. 1b, pp. 441-452, 2004 [18] niell, a. e. global mapping functions for the atmosphere delay at radio wavelengths. journal of geophysical research, vol. 101, b2, pp. 3227-3246, 1996 [19] nilsson, t., gradinarsky, l., elgered, g. assessment of tomographic methods for estimation of atmospheric water vapour using ground-based gps. chalmers university of technology, göteborg, sweeden, 2005 [20] noguchi, w., yoshihara, t., tsuda, t., hirahara, k. time-height distribution of water vapor derived by moving cell tomography during tsukuba gps campaigns, journal of the meteorological society of japan, vol. 82, no. 1b, pp. 561-568, 2004 [21] notarpietro, r., cucca, m. gabella, m., venuti, g., perona, g. tomographic reconstruction of wet and total refractivity fields from gnss receiver networks, advances in space research, vol. 47, pp. 898–912, 2011 geoinformatics fce ctu 9, 2012 75 kačmařík, m., rapant, l.: new gnss tomography of the atmosphere method [22] seidel, d. j. water vapor: distribution and trends. the earth system: physical and chemical dimensions of global environmental change, john wiley & sons, ltd, 2002 [23] shoji, y., kunii, m., saito, k. assimilation of nationwide and global gps pwv data for a heavy rain event on 28 july 2008 in hokuriku and kinki, japan, scientific online letters on the atmosphere, vol. 5, pp. 45-48, 2009 [24] skone, s.h., shrestha, s.m.: strategies for 4-d regional modeling of water vapour using gps, wuhan university journal of natural sciences, vol. 8, no. 2b, 627-635, 2003 [25] song, d.-s., grejner-brzeninska, d. remote sensing of atmospheric water vapor variation from gps measurements during a severe weather event, earth, planets and space, vol. 61, n. 10, pp. 1117-1125, 2009 [26] troller, m. gps based determination of the integrated and spatially distributed water vapor in the troposphere, phd thesis, swiss federal institute of technology, zurich, switzerland, 2004 [27] vedel, h., huang, x. impact of ground based gps data on numerical weather prediction, journal of the meteorological society of japan, vol. 82, no. 1b, pp. 459-472, 2004 [28] zhengdong, b. near-real time gps sensing of atmospheric water vapour, phd thesis, queensland university of technology, australia, 2004 [29] zumberge, j. f., heflin, m. b., jefferson, d. c., watkins, m. m., webb, f. h. precise point positioning for the efficient and robust analysis of gps data from large networks. 
journal of geophysical research, 102(b3):5005–5017, 1997 geoinformatics fce ctu 9, 2012 76 automated point clouds processing for deformation monitoring ján erdélyi slovak university of technology in bratislava department of surveying jan.erdelyi@stuba.sk abstract the weather conditions and the loading during operation cause changes in the spatial position and in the shape of engineering structures that affect static and dynamic function and reliability of these structures. because of these facts, geodetic measurements are integral parts of engineering structures’ diagnosis. the advantage of terrestrial laser scanning (tls) over conventional surveying methods is the efficiency of spatial data acquisition. tls allows contactless determination of the spatial coordinates of points lying on the surface on the measured object. the scan rate of current scanners (up to 1 million of points per second) allows significant reduction of time, necessary for the measurement; respectively increase the quantity of the information obtained about the object measured. to increase the accuracy of the results, selected parts of the monitored structure can be approximated by single geometric entities using regression. in this case the position of measured point is calculated from tens or hundreds of scanned points. this paper presents the possibility of deformation monitoring of engineering structures using the technology of tls. for automated data processing was developed an application based on matlab®, displacement tls. the operation mode, the basic parts of this application and the calculation of displacements are described. keywords: terrestrial laser scannig, deformation monitoring 1. terrestrial laser scanning the technology of terrestrial laser scanning (tls) is a non-selective method of spatial data acquisition. tls determines the 3d coordinates of the measured points on the surface of the measured object in a grid, which is defined by regular angular spacing in the horizontal and vertical directions [8]. the result of tls is an irregular raster of measured points, the so-called point cloud, which documents the measured object (fig. 1). the difference between tls and conventional surveying methods is that the coordinates of characteristic points are obtained by modelling, respectively by generalization of the main elements of 3d models or the resulting point cloud [9]. most of the current tls works on the principle of spatial polar method. the spatial position of measured points are calculated from the measured horizontal and vertical angles and from the measured slope distance (fig. 2 left). the optimal source of radiation of electromagnetic waves for scanning systems are lasers. these are used for contactless measuring of distances. laser beams are highly monochromatic geoinformatics fce ctu 14(2), 2015, doi:10.14311/gi.14.2.5 47 http://orcid.org/0000-0001-9492-2775 http://dx.doi.org/10.14311/gi.14.2.5 http://creativecommons.org/licenses/by/4.0/ j. erdélyi: automated point clouds processing for deformation monitoring figure 1: the point cloud of the bridge no. 137, bojnicka street in bratislava and have a narrow spectral line width compared to other sources of radiation. the deflection of the laser beams is provided by oscillating mirrors, rotating prism, by rotation of the laser source around horizontal and vertical axis of the instrument or by fiber optics, resp. by combination of the methods above mentioned [8]. 
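the spatial polar method mentioned above reduces to a few lines of code; the axis convention and the use of a zenith angle (rather than a vertical angle measured from the horizon) are choices made only for this sketch:

    import numpy as np

    def polar_to_xyz(hz, zenith, slope_dist):
        """local 3d coordinates of a scanned point from the horizontal direction hz,
        the zenith angle and the slope distance (angles in radians)."""
        x = slope_dist * np.sin(zenith) * np.cos(hz)
        y = slope_dist * np.sin(zenith) * np.sin(hz)
        z = slope_dist * np.cos(zenith)
        return np.array([x, y, z])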
the most common used is the combination of rotation of instrument around the vertical axis, and an oscillating mirror. figure 2: the principle of spatial polar method (left), and the combination of rotating source and oscillating mirror (right) the process of data acquisition and the modelling using tls can be divided into three main steps. the first one is the preparation for the measurement (for scanning), recognition of the measured object, the choice of positions for the instrument, signalization of the control points. the second step is the process of scanning and the third one is the processing of data obtained by tls. the data processing contains: • preparation of the point cloud for data processing. this includes initial adjustments of point cloud: error elimination, filtering and data reduction, transformation (between different coordinate systems), elimination of unnecessary points, and coloring of points (assigning the colors according to the intensity of measuring signal or from photographs). • processing of data obtained by tls. spatial model creation of the measured object or its parts, determination of geometric parameters (e.g. dimensions) and deformations of chosen parts of the measured object. geoinformatics fce ctu 14(2), 2015 48 j. erdélyi: automated point clouds processing for deformation monitoring • visualization. rendering of the created model, and creation of animations. 2. deformation monitoring the weather conditions and the loading during operation cause changes in the spatial position and in the shape of engineering structures that affect static and dynamic function and reliability of these structures. because of these facts, geodetic measurements are integral parts of engineering structures’ diagnosis [1, 3]. one of the ways of deformation monitoring of engineering structures using tls is the use of differential models [4, 6]. this method is used to determine the displacements of large surfaces, e.g. surface of the bottom of bridge deck. from the point cloud is created a triangular network at every epoch of measurement. from the differences between them is determined the displacement of the measured structure. these differences are measured in a defined grid in perpendicular direction to the reference plane defined in the epoch of initial measurement [7]. the disadvantage of this method is the lower accuracy, because the triangles are created from the points determined by accuracy of several millimeters (depends on the accuracy of the chosen instrument). to increase the accuracy of the results of deformation monitoring, selected parts of the monitored structure can be approximated by single geometric entities using regression. in this case the position of the measured points is calculated from tens or hundreds of scanned points. vertical displacements of the measured points on the bottom part of the selected structures can be determined as the difference between the z coordinates of these points in each measurement epoch (fig. 3). the position of the measured points is modelled by small regression planes using orthogonal regression. figure 3: vertical displacement determination of part of the harbor bridge in bratislava orthogonal regression is calculated from the general equation of a plane geoinformatics fce ctu 14(2), 2015 49 j. 
erdélyi: automated point clouds processing for deformation monitoring

\[ a \cdot x + b \cdot y + c \cdot z + d = 0 \tag{1} \]

where a, b and c are the components of the normal vector of the plane, x, y and z are the coordinates of a point lying in the plane, and d is the negative scalar product of the normal vector of the plane and the position vector of any point of the plane. the orthogonal distance of a point from the plane is calculated by

\[ d_{p,\rho} = \frac{\lvert a \cdot x_p + b \cdot y_p + c \cdot z_p + d \rvert}{\sqrt{a^2 + b^2 + c^2}} \tag{2} \]

the requirement of orthogonal regression is that the sum of the squares of the orthogonal distances has to be minimal, so

\[ \sum_{i=1}^{n} \frac{\lvert a \cdot x_i + b \cdot y_i + c \cdot z_i + d \rvert^{2}}{a^2 + b^2 + c^2} = \min \tag{3} \]

where n is the number of points used for the calculation of the plane. partial differentiation of (3) with respect to d leads to

\[ 2 \cdot \sum_{i=1}^{n} \frac{a \cdot x_i + b \cdot y_i + c \cdot z_i + d}{a^2 + b^2 + c^2} = 0 \tag{4} \]

according to the previous formula, the parameter d can be expressed as

\[ d = -(a \cdot x_0 + b \cdot y_0 + c \cdot z_0) \tag{5} \]

and the general equation of the plane becomes

\[ a \cdot (x_i - x_0) + b \cdot (y_i - y_0) + c \cdot (z_i - z_0) = 0 \tag{6} \]

where (x_i − x_0), (y_i − y_0) and (z_i − z_0) are the coordinates of the point cloud reduced to the centroid. for each point of the point cloud a formula according to (6) can be written; the design matrix of the system of equations then has the form

\[ A = \begin{pmatrix} (x_1 - x_0) & (y_1 - y_0) & (z_1 - z_0) \\ (x_2 - x_0) & (y_2 - y_0) & (z_2 - z_0) \\ \vdots & \vdots & \vdots \\ (x_n - x_0) & (y_n - y_0) & (z_n - z_0) \end{pmatrix} \tag{7} \]

the orthogonal regression is calculated by applying the singular value decomposition

\[ A = U \Sigma V^{T} \tag{8} \]

where A is the design matrix with dimensions n × 3 and n is the number of points used for the calculation. the column vectors of U (n × n) are the normalized eigenvectors of the matrix A·Aᵀ, the column vectors of V (3 × 3) are the normalized eigenvectors of Aᵀ·A, and the matrix Σ (n × 3) contains the singular values on its diagonal. the normal vector of the regression plane is then the column vector of V corresponding to the smallest singular value in Σ [2, 5].

the position of the observed points in the xy plane is defined as fixed. the advantage of this procedure is that the position of the measured points does not change with the thermal expansion of the structure. the z coordinates (heights) of the measured points are calculated by projecting the points onto the regression planes (fig. 4) using the formula

\[ z_p = -\frac{a \cdot x + b \cdot y + d}{c} \tag{9} \]

figure 4: determination of the height of measured points

the standard deviation of the results is calculated using the uncertainty propagation law from the standard deviation of the vertical component of the transformation error and the standard deviation of the regression planes,

\[ \sigma_{z_p} = \sqrt{\sigma_{t_z}^{2} + \sigma_{\rho}^{2}} \tag{10} \]

where σ_tz is the vertical component of the error of the data transformation and σ_ρ is the standard deviation of the calculated regression plane.

the transformation of the point clouds is needed to obtain the data of all measurement epochs in a common coordinate system. the accuracy of the transformation is given by the differences (∆x, ∆y, ∆z) between the identical reference points after the transformation of the scanned point cloud of the current measurement epoch into the coordinate system of the initial measurement epoch. the standard deviation of the regression planes is calculated from the orthogonal distances of the points of the point cloud from these planes.
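the computation of equations (1)–(10) can be condensed into a short numpy sketch; the variable names and the selection of the smallest singular value via numpy's svd are choices of this illustration, not the displacement tls code itself:

    import numpy as np

    def fit_plane(points):
        """orthogonal regression plane through an (n, 3) array of scanned points;
        returns the unit normal (a, b, c) and the parameter d of a*x + b*y + c*z + d = 0."""
        centroid = points.mean(axis=0)
        design = points - centroid                  # reduction to the centroid, eq. (7)
        _, _, vt = np.linalg.svd(design, full_matrices=False)
        normal = vt[-1]                             # right singular vector of the smallest singular value
        return normal, -normal.dot(centroid)

    def projected_height(x, y, plane):
        """height of a monitored point obtained by projecting its fixed (x, y)
        position onto the regression plane, eq. (9)."""
        (a, b, c), d = plane
        return -(a * x + b * y + d) / c

    def plane_sigma(points, plane):
        """standard deviation of the regression plane, estimated from the
        orthogonal distances of the scanned points from the plane."""
        normal, d = plane
        return (points @ normal + d).std()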
dispersion of the points around the plane reflects the random error (noise) of the distance measurement by tls (coordinate determination) mainly. to eliminate the effects of systematic errors it is recommended to perform the measurements during the same conditions in each epoch (position of the scanner, temperature, etc.). the effect of the systematic errors is included in the accuracy of determination of coordinates of the reference points (stable objects in each epoch). 3. displacement tls application the proposed procedure of vertical displacement determination, depending on the number of the points of the point cloud, places high demand on computing used for the data processing, as well as on the operator itself (definition of large number of fences, export/import the data, regression calculation, etc.). for the partial automation of the above mentioned procedure, a computational application “diplacement tls” was developed (fig. 5). the application is based on the computational software matlab® by mathworks®. figure 5: dialog box of application displacement tls the application was created as a standalone app; however the matlab runtime is necessary to be installed. the work with the app is as follows: in the first step the user can choose a work directory in which the resulting files will be saved. the second step is the point cloud file loading in *.txt or *.xyz file format which contains the coordinates of scanned points. geoinformatics fce ctu 14(2), 2015 52 j. erdélyi: automated point clouds processing for deformation monitoring the measured points can be arranged in *.xls or *.xlsx file defining the coordinates of the monitored points. the vertical displacements are calculated in the points defined by the mentioned file and are transformed to perpendicular displacements using the normal vectors of planar surfaces. the user can load the resulting file of the previous measurement epoch in the initial previous measurement box (for comparison of point heights). without this file the result will be an *.xlsx file containing the heights of the measured points. the fencing boxes, selecting part of the point cloud around the measured points, are defined by its dimensions along axis x, y and z. these boxes define approximately the same set of points in each measurement epoch. the standard deviation of the registration (vertical component of the error of the data transformation) is necessary for the calculation of the standard deviation of the results using the uncertainty propagation law. the results are shown in the table on the right side of the app’s dialog window and are saved into an *.xlsx file in the work directory. the table contains the ids of the measured points, their coordinates, the standard deviation of the height of measured points (z coordinates), the vertical displacements, the perpendicular displacements and the standard deviation of displacements. a figure which shows the point cloud and the displacement vectors in a relative scale is created for the better imagination of the results. 3.1. conclusion the issue of inspection of safety operation of engineering structures is always current and closely joined with the activities of surveyors. one of the most important tasks is the determination of the displacements of the selected parts of these structures using different surveying methods. monitoring of engineering structures can be performed using tls also. 
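the first steps of the workflow — loading the *.txt or *.xyz point cloud and cutting out the points inside a fencing box around a monitored point (the boxes are described in the next paragraph) — could look as follows; the column order, the delimiter handling and the function names are assumptions of this sketch only:

    import numpy as np

    def load_cloud(path):
        """read a point cloud stored as plain text with one x y z triple per line."""
        return np.loadtxt(path, usecols=(0, 1, 2))

    def fence(cloud, centre, size):
        """select the points of the cloud lying inside a fencing box centred on a
        monitored point; size holds the box dimensions along the x, y and z axes."""
        half = np.asarray(size, dtype=float) / 2.0
        mask = np.all(np.abs(cloud - np.asarray(centre, dtype=float)) <= half, axis=1)
        return cloud[mask]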
to obtain the accuracy comparable with the accuracy of the conventional surveying methods, selected parts of the monitored structure should be approximated by single geometric entities using regression algorithms. the paper describes the procedure of displacement determination of engineering structures from point clouds acquired by tls. the proposed procedure is based on the modelling of the selected part of structures using small planar surfaces by orthogonal regression. the z coordinates (heights) of the measured points are calculated by projecting the measured points onto regression planes. the vertical displacements of the measured points on the bottom part of the selected structures are determined as the differences between the z coordinates of these points in each measurement epoch. this procedure significantly improves the accuracy of the resulting displacements, because the position of the measured point is calculated form tens or hundreds of scanned points, not only from one discrete scanned point. an application based on software matlab® displacement tls was developed for automated data processing. the above mentioned procedure of deformation determination is performed and controlled with help of the graphical user interface of the application. the application was created as a standalone app; however the matlab runtime is necessary to be installed. acknowledgement this work was supported by the scientific grant agency of the ministry of education, science, research and sport of the slovak republic and the slovak academy of sciences under the geoinformatics fce ctu 14(2), 2015 53 j. erdélyi: automated point clouds processing for deformation monitoring contract no.vega-1/0445/13. references [1] jaroslav braun and martin štroner. “geodetic measurement of longitudinal displacements of the railway bridge”. in: ingeo 2014 and fig regional central and eastern european conference on engineering surveying. vol. 1. isbn 978-80-01-05469-7. 2014, pp. 231–236. [2] aleš čepek and jan pytel. “a note on numerical solutions of least squares adjustment in gnu project gama”. in: interfacing geostatistics and gis. springer science + business media, 2009, pp. 173–187. doi: 10.1007/978-3-540-33236-7_14. [3] alojz kopáčik et al. “dynamic deformation monitoring of a technological structure”. in: geodetski list 67(90).3 (2013), pp. 161–174. issn: 0016-710x. [4] bronislav koska et al. “monitoring of lock chamber dynamic deformation.” in: proceeding of measuring the changes 13th fig symposium on deformation measurements and analysis and 4th iag symposium on geodesy for geotechnical and structural engineering. lnec: lisabon, 2008. [5] vladimír lacko. “singulárny rozklad matíc a úskalia programovej realizácie golubovho algoritmu na jeho určenie (singular value decomposition and difficulties of software implementation of golub algorithm and its determination)”. in: student science conference. comenius university in bratislava, 2008, p. 69. [6] jiří pospíšil, bronislav koska, and tomáš křemen. “using laser scanning technologies for deformation measuring”. in: optical 3-d measurement techniques. vol. 2. isbn 3-906467-67-8. zurich: swiss federal institute of technology zurich, 2007, pp. 226–233. [7] thomas schäfer et al. “deformation measurement using terrestrial laser scanning at the hydropower station of gabčíkovo”. in: ingeo 2004 and fig regional central and eastern european conference on engineering surveying. [cd-rom]. bratislava: kgde svf stu, 2004, p. 10. isbn: 87-90907-34-05. [8] martin. štroner et al. 
3d skenovací systémy. isbn 978-80-01-05371-3. česká technika nakladatelství čvut, čvut v praze, 2013. [9] george vosselman and hans-gerd maas. airborne and terrestrial laser scanning. whittles publishing, 2010, p. 336. isbn: 978-1904445-87-6. geoinformatics fce ctu 14(2), 2015 54 http://dx.doi.org/10.1007/978-3-540-33236-7_14 implementing inspire for the czech cadastre of real estates jiří poláček, petr souček the czech office for surveying, mapping and cadastre (cosmc) prague, the czech republic jiri.polacek@cuzk.cz petr.soucek@cuzk.cz abstract the article is dedicated to the topic of the implementation of inspire directive within the information system of the czech cadastre of real estates. the procedure of implementation of the inspire directive for cadastral related themes, which started in 2008, is followed. currently running view and download services as well as experience with its operational run are described. finally an overview of the implementation problems and scheduled follow-up activities are outlined. keywords: inspire, implementation, cadastre of real estates, view service, download service, wms, wfs 1. introduction this contribution is dedicated to the topic of the implementation of inspire directive within the information system of the czech cadastre of real estates. czech cadastre of real estates is open concerning provision of information. the cadastral mapping in the austrian empire started in 1817 by an imperial directive, which enabled to denunciate and obtain this way neighbour’s property, if he did not enter his real estate to the cadastre. for this purpose was cadastre publicly open from the very beginning. the principle of openness showed to be very useful after mortgages have been introduced in the cadastre evidence as well. since that, cadastre in our country is open; everybody can view it for free. the czech gis users are since 1993 supported by cadastral data services (generally paid) involving exchange format for the written part and digital cadastral map as well as scanned cadastral maps in raster form. the access to cadastral information is also according to the czech cadastral law very open. the accessibility of cadastral information was improved by the information system of cadastre of real estates (iscre), launched in 2001, especially with: • establishing the central database which contains cadastral data of the whole territory of the czech republic and being updated every 2 hours. • the application called “remote access“(ra), which is generally paid and for registered users only. ra is a comprehensive application with a broad variety of functions, which enables an on-line access to cadastral information. as a reflection of necessary need of free cadastral information, cosmc prepared the application called “viewing cadastre”, which started with the operational run in 2004. the application enables to access basic data linked with specified property (owner, area, culture etc.), it’s first version handled with the descriptive information only. this free application geoinformatics fce ctu 8, 2012 9 poláček j., souček p.: implementing inspire for the czech cadastre became in a short time the most frequently visited governmental web page within the czech republic. recently we can monitor more than 1.8 mil. 
visits per month, more than 1,500 parallel users in rush hours (see following table) visits ip addresses data amount hits (thousands) (thousands) (gb) (millions) 2007 670 132 76,1 29,755 2008 1026 197 876,4 116,841 2009 1118 235 1351,1 149,353 2010 1247 291 1770,8 210,744 2011 1748 366 4145,9 357,198 2012* 1980 394 5343,3 456,408 table 1: average monthly statistics of demands on viewing cadastre application (* 9 months) 2. inspire implementation process in cosmc shortly after the inspire directive entered into force in may 2007, preparation of its operational run at cosmc started. the impulse for this activity was the fact, that the approved inspire directive declared free of charge viewing for its themes. great demand of our users linked with functionality has been expected especially for „cadastral parcels“ and „buildings“ themes, which represents approximately 85% of the czech cadastral map content. we started to consider operational requirements and economical aspects of the expected run has been considered and finally a decision to start some kind of viewing services as soon as possible to get an experience has been made. the first version of wms for the cadastral maps has been launched in april 2008. the exchange format of the digital cadastral map has been used as a source of vector graphical information, in this phase of the project was this information approximately 20 days old. these wms (free with no registration required) were used to introduce graphical information into the „viewing cadastre“ application as well. after data specification of “cadastral parcels“ theme was approved, cosmc took part in testing data specification as a lmo in 2008-2009. we succeeded in making the conversion into the inspire gml format and recognized some conversion problems to solve. 3. conversion problems crucial problem was the conversion of map accuracy. inside the iscre is so called quality code linked with separate measured (or digitized) points, on the other side inspire data specification for the „cadastral parcel“ theme enables to link accuracy with the parcel boundary line only. this conversion of point accuracy into line accuracy was not easy, because it had to take into account the specific methodology for the creation and maintenance the czech digital cadastral map and inserting a point with lower accuracy on the straight line could not downgrade the original accuracy. the implementation had to be robust enough because it was proceeded online. as a result, cadastral map with distinction of an accurate and less accurate parcel boundaries has been derived. this output is also wildly used in viewing cadastre application as so called “green and red” cadastral map (see picture 1). another problem to solve was parcel boundaries in the form of arcs and circles. gml 3.2.1 geoinformatics fce ctu 8, 2012 10 poláček j., souček p.: implementing inspire for the czech cadastre supports arcs and circles, but data specification for cadastral parcel theme stated, that parcel boundaries can be only in a line form. we had to substitute arcs by polylines, but we store original geometry in gml as well. furthermore some topological aspects of this conversion had to be taken in account. furthermore, improvement of accuracy of the geometrical transformation from the czech cadastre coordinate system jtsk to the etrs 89 has to be mentioned. these activities were carried out by the research institute of geodesy, topography and cartography. 
figure 1: accuracy of parcel boundaries in viewing cadastre application (green – accurate, red – less accurate) 4. publication database after successful testing a discussion about optimal way of inspire implementation within the czech cadastre of real estates started. it was clear, that iscre data structure is so different, that we can’t easily adopt inspire data specification inside it. finally in 2009 a decision has been made to create „publication database“ online linked to the iscre database. the goal was not only to implement the inspire directive, but to offer web services, which could enable for our users and internal cosmc applications access up-to-date cadastral maps. the publication database contains graphical data necessary for viewing and download inspire services. original graphical data are converted into iso model (harmonized form). geoinformatics fce ctu 8, 2012 11 poláček j., souček p.: implementing inspire for the czech cadastre data are updated on the fly from the source iscre database, in near future also from the territorial identification system for the addresses and administrative units themes (see chapter “download services”). updating process is accomplished using oracle pl/sql scripts and replications. inside this database is being the necessary conversion performed. the conversion covers necessary input data checking (including the topological ones). probably the most laborious part of the implementation was the incremental concept of updating the publication database, especially to link database transactions performing the conversion with replications of changes, and to maintain error conditions. this process had to be sophisticated and robust enough to manage changes in 2-hours period. the services (wms, wfs) provided from this database handle with data approximately 2 hours old. 5. view services view service for “cadastral parcels” theme is available on the address http://services. cuzk.cz/wms/inspire-cp-wms.asp. this service was launched on 9.5.2011 and is available for free with no registration required. the service runs using harmonized data of the cadastral parcel theme. the complete cadastral map contains a non-harmonized wms (broadening of the harmonized one) on http://services.cuzk.cz/wms/wms.asp. the harmonized service offers following layers: • parcel boundaries and parcel numbers (up to the 1:20 000 scale) • parcel boundaries (up to the 1:20 000 scale) • boundaries and names of the cadastral units (up to the 1:100 000 scale) see selected samples at figure 2. 6. wms operational run and inspire implementation problems the demand for this service has been instantly growing as shown on the following table; for instance in 2011 the number of visits of wms reached almost 70, 000 per month, with more than 22, 800 unique ip addresses, almost 23 mil. hits (get map) and more than 208 gb of uploaded data. in rush hours the number of requests is more than 3,500 per minute. visits ip addresses data amount hits (thousands) (thousands) (gb) (millions) 2008 19,4 5,5 24,7 1,0 2009 32,4 6,7 54,7 3,0 2010 41,4 8,9 105,1 8,5 2011 70,0 22,9 208,6 23,0 2012** 151,8 50,5 253,3 42,5 table 2: average monthly statistics of demands on wms (** 9 months) the general trouble with operational run of free service linked to the wildly used applications is that it is almost impossible to guarantee optimal reaction time for it under all circumstances. 
for example, publishing a link to the viewing cadastre application (which is using wms) by geoinformatics fce ctu 8, 2012 12 http://services.cuzk.cz/wms/inspire-cp-wms.asp http://services.cuzk.cz/wms/inspire-cp-wms.asp http://services.cuzk.cz/wms/wms.asp poláček j., souček p.: implementing inspire for the czech cadastre figure 2: samples of wms layers an electronic version of a popular newspaper caused, that demand on the application raised five times more within few hours and as a result responses of wms exceeded required limits. common problem of the inspire implementation is term of publication of implementing rules and technical guidelines. some of them are published so late, that it is difficult to keep deadlines. for example technical guidance for the implementation of inspire download services has been published on june the 14. it’s a bit late considering that we should start this service in the non-harmonized form on 28.th june. the implementation is also negatively influenced by the fact, that ogc, iso a inspire standards differ and are significantly shifted in time. for example inspire metadata and data specifications are related to different versions of iso standard. geoinformatics fce ctu 8, 2012 13 poláček j., souček p.: implementing inspire for the czech cadastre the requirements on the implementing staff should be mentioned in this context. these people must be experts in a specific theme (in our case in the cadastre of real estates) and at the same time govern technologies like uml, gml e.g. on a high level. 7. download services further progress of the services connected with the implementation of the inspire directive in the czech republic has been significantly influenced by establishing the base registers being a part of the czech e-government, done especially by a principal decision to provide the data from the base register of territorial identification, addresses and real estates (brtiare) free of charge. the content of this register practically covers the requirements for the following themes: “cadastral parcels“, „buildings“, ”addresses“ and “administrative units“. this is the reason why it was decided to provide the download services of those inspire themes free of charge, without any registration as well. 21.5.2012 launched cosmc pilot run of inspire download services for the “cadastral parcels” theme. operational run of this service started on 22.6.2012. this service provides data in a vector form from already digitized regions (approximately two thirds of the czech republic territory). this download services are provided for free, no registration is required. both download service for pre-defined data set and direct access download service (wfs) are supported. data and services are harmonised according following specifications: • draft implementing rules for download services (version 3.0). • inspire data specification on cadastral parcels guidelines v 3.0.1. pre-defined data set: pre-prepared files (gml version 3.2.1) are updated once daily, separate file for each cadastral unit is being created. coordinate systems supported for pre-defined data set: name epsg file location s-jtsk krovak east north 5514 http://services.cuzk.cz/gml/inspire/cp/epsg-5514/ etrs 89 4258 http://services.cuzk.cz/gml/inspire/cp/epsg-4258/ file names are structured as xxxxxx.zip, where xxxxxx means 6-digits code for cadastral unit (according following cadastral unit code list) http://www.cuzk.cz/dokument.aspx? prareskod=10&menuid=10015&akce=doc:10-cise_kuap. 8. 
direct access download service for wfs (web feature service) is following functionality supported: getcapabilities, describefeaturetype, liststoredqueries, describestoredqueries a getfeature according 2.0.0 version of the ogc standard (http://www.opengeospatial.org/standards/wfs). basic url to link wfs is http://services.cuzk.cz/wfs/inspire-cp-wfs.asp. some applications require full address including getcapabilities: http://services.cuzk.cz/wfs.asp? service=wfs&version=2.0.0&request=getcapabilities. geoinformatics fce ctu 8, 2012 14 http://www.cuzk.cz/dokument.aspx?prareskod=10&menuid=10015&akce=doc:10-cise_kuap http://www.cuzk.cz/dokument.aspx?prareskod=10&menuid=10015&akce=doc:10-cise_kuap http://www.opengeospatial.org/standards/wfs http://services.cuzk.cz/wfs/inspire-cp-wfs.asp http://services.cuzk.cz/wfs.asp?service=wfs&version=2.0.0&request=getcapabilities http://services.cuzk.cz/wfs.asp?service=wfs&version=2.0.0&request=getcapabilities poláček j., souček p.: implementing inspire for the czech cadastre xsd for the download service is located on http://services.cuzk.cz/xsd/wfs/basicfeature/. wfs limits: • for cadastralboundaries and cadastralparcels is the request limited on the area of 1 km2 and 10000 elements, • for cadastralzoning the limit is 400 km2 and 500 elements, • general sql is not supported. supported coordinate systems: name epsg code s-jtsk krovak east north 102067 s-jtsk / krovak 2065 pulkovo 1942 / gauss-kruger cm 15e 2493 pulkovo 1942 / gauss-kruger cm 21e 2494 etrs89 / laea europe 3035 wgs 84 / world mercator 3395 popular visualisation crs / mercator 3785 wgs 84 / pseudo-mercator 3857 etrs89 4258 wgs 84 4326 s-jtsk / krovak east north 5221 s-jtsk/05 / modified krovak 5224 s-jtsk/05 / modified krovak east north 5225 s-jtsk / krovak east north 5514 s-42 zone 3 s-42 zone 4 28403 28404 wgs 84 / utm zone 32n 32632 wgs 84 / utm zone 33n wgs 84 / utm zone 34n 32633 32634 wgs 84 / spherical mercator 900913 8.1. examples of calling getting specific parcel (by par id) http://services.cuzk.cz/wfs/inspire-cp-wfs.asp?service=wfs&version=2.0.0& request=getfeature&typenames=cadastralparcel&srsname=urn:ogc:def:crs:epsg::5514& filter= geoinformatics fce ctu 8, 2012 15 http://services.cuzk.cz/xsd/wfs/basicfeature/ poláček j., souček p.: implementing inspire for the czech cadastre getting specific area (by bbox) http://services.cuzk.cz/wfs/inspire-cp-wfs.asp?service=wfs&version=2.0.0& request=getfeature&typenames=cadastralboundary& bbox=-757125,-990823,-756712,-990556&srsname=urn:ogc:def:crs:epsg::5514 9. conclusions download services, which are now being run for “cadastral parcels” theme, will be in the first half of 2013 complemented by “addresses” and “administrative units” themes. the term for similar service for “buildings” theme depends on the approval of the final draft for its data specification. the concept of the publication database proved to be very liable. using one technological infrastructure, online wms are provided both strictly according to the inspire implementing rules and even an enhanced form is run, covering content of the whole cadastral map (including its raster form). a great number of external user’s applications are now based on these services, as well as cosmc applications like viewing cadastre, base register of territorial identification and the branch geoportal. furthermore the publication database enables online editing of the dynamic part of metadata, checking graphical cadastral data and is used for some managerial and planning purposes as well. 
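as a small usage illustration of the direct access download service, the bbox request from section 8.1 can also be issued from a script; the python requests library is used here only for demonstration, and the query parameters are copied verbatim from the example above:

    import requests

    params = {
        "service": "wfs",
        "version": "2.0.0",
        "request": "getfeature",
        "typenames": "cadastralboundary",
        "bbox": "-757125,-990823,-756712,-990556",
        "srsname": "urn:ogc:def:crs:epsg::5514",
    }
    # direct access download service of the cosmc (see section 8)
    response = requests.get("http://services.cuzk.cz/wfs/inspire-cp-wfs.asp", params=params)
    response.raise_for_status()
    gml_document = response.text   # gml 3.2.1 feature collection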
geoinformatics fce ctu 8, 2012 16

designing a new raster sub-system for grass-7
martin hruby
it4innovations centre of excellence, brno university of technology, božetěchova 2, brno, 61266, czech republic
hrubym@fit.vutbr.cz
keywords: grass, gis, raster sub-system, geographical data, 3d rasters

abstract
the paper deals with the design of a new raster sub-system intended for modern gis systems, open to client and server operation, database connection and a strong application programming interface (api). the motivation for such a design comes from the current state of the api in grass 6. if found attractive, the design presented here and its implementation (referred to as rg7) may be integrated into a future generation of the grass geographical information system, version 7 or 8. the paper describes in detail the concept of raster tiling, the computer storage of rasters and the basic raster access procedures. finally, the paper gives a simple benchmarking experiment of random read access to raster files imported from the spearfish dataset. the experiment compares the early implementation of rg7 with the current implementation of rasters in grass 6 and shows rg7 to be significantly faster than grass in random read access to large raster files.

introduction and motivation
grass [1] is a well-known open-source gis system with a long history, a wide range of analytical tools and a rather large community of users. many users take part in its development: they comment on its quality, report bugs, and some implement their own system or analytical tools in the form of independent grass modules. the main strength of grass is traditionally supposed to lie in raster analysis, so rasters should be something that works very well in grass. unfortunately, the current raster application interface (api) of grass-6 does not offer functions that would comfortably support the development of complicated raster analytical algorithms. moreover, the current raster core will probably not process large raster files efficiently. this paper presents an alternative which, if found reasonable, might replace the current raster core with a new one – referred to as rg7 (rasters for grass-7) in this paper.
there are several open-source initiatives and traditional software packages oriented to the processing of raster files; let us mention at least the well-known gdal [5] (or gdal/ogr), postgis raster [6] and raster3d [7]. the gdal library is mostly a collection of storage formats and a very large software complex. postgis provides mostly storage functionality with a certain support for raster data manipulation and analysis – however, via sql statements, which is the opposite approach to rg7. the raster3d library is the only one of these intended strictly for grass and is currently still in development. compared to them, the rg7 library has a different motivation, coming mainly from the user's perspective. the main idea is to provide a very abstract view of the raster data and, moreover, to provide it in a highly efficient way; second, to keep the library code as simple as possible. the paper discusses the application interface giving the user a set of functions operating on the user data (raster map layers), the format of storing raster data, and the whole architecture of the raster sub-system.
let us begin with the current state of grass, i.e. let us summarize the current abilities of the raster api in grass-6 [2] (and consequently the architecture of the current raster core):
• a set of functions for opening and closing an existing raster or creating a new raster file,
• a function (g_get_map_row(...)) for reading a specified row from a raster layer,
• a function (g_put_map_row(...)) for writing a specified row to a raster layer,
• other support routines implementing the metadata, including a possible reclass table.
clearly, any api must contain functions for opening, creating and closing the raster data. as new features, we may discuss multi-user access to the data, various access rights, remote access to the data (e.g. in the form of a grass file server), transactions and so on.
the system of reading and writing a row of the raster is the main point of criticism of the current state of the api. the problem has two levels: first, the user's approach to the data, and second, the system implementation of such a call. in the current state, a user obtains a particular row of the raster no matter what part of it he actually needs. this kind of sequential approach is sufficient for rather primitive tools (r.mapcalc, r.in/out, etc.), but let us imagine an analytical algorithm requiring random access (or, better said, index-sequential access) to the whole raster layer. a developer implementing such a tool has to create his own library on top of the grass raster api, caching the lines and serving as a necessary abstraction over the api. i suppose we would rather let the developer concentrate on the core algorithm of his application. on the other hand, the current grass raster sub-system, together with the set of raster tools, is nowadays optimized for exactly this row-based approach. it must be said at the very beginning that the approach presented here is not generally better than the current one, but it has a good chance of being an advantage for the future development of grass. this paper presents a kind of opposition to the current row-access approach, even though some developers claim that the row approach is natural for a large part of raster analytic algorithms. some comparisons and reflections are given in the section "experimental results".
let us now give the features of the rg7 raster sub-system:
• support for extremely large raster files with a guarantee of constant-time access to any part of the layer, i.e. 1) any part of the raster is accessible in the same time as any other, and 2) the size of the raster has no (or only marginal) influence on the access time;
• native support for multiple data formats and for storing the rasters in an sql database (for example in postgresql [3] or sqlite3 [4]);
• real multi-user access and grass ready to work as a file server;
• an integrated 3d-raster concept;
• a comfortable api ready to give any sort of view of the data, allowing the development of user raster applications on a very high level of abstraction;
• raw raster data and other information packages (mainly the various metadata) handled in a uniform manner and stored in a single place, where sql databases are preferred (motto: no files, no troubles).
processing extremely large rasters (gigabytes in size) together with an arbitrary user-specified sliding window over the raster needs a special design of the storage format and quick access to sub-regions within the file. this is why rg7 is based on raster tiles.
every raster file available through rg7 will be physically decomposed into a grid of so-called physical tiles. this is the main difference from the current grass raster concept – the atomic element of raster data is not the row but the tile. a tile is a block of fixed size no matter how large the raster file is, i.e. raster files with long rows do not influence the complexity of their processing. the advantages and disadvantages are discussed in the section "conclusion of the experiments". it will be shown that, with the rg7 design, the technical way of storing the data is completely independent of its logical tile representation; moreover, many storage principles can be used, e.g. storing in a classical file managed by the operating system (referred to as an os-file), in an sql database, or virtualized via the network. the main interest of the rg7 design is focused on the set of functions forming an interface between the user and the grass kernel. simplifying the style of raster programming might encourage new users to bring their algorithms into the grass code.

technical and geographical definition of the tiles

definition of a tile
a tile is a geographically defined sub-region within a raster layer. a raster file itself is defined by its boundary coordinates and the number of cells in the north-south and east-west directions. similarly, a tile is defined by its boundary coordinates and the number of cells in both directions. by decomposing the raster into a set of tiles, we get an organized system of small data elements (tiles) which are easy to operate on. the tiles form an almost regular grid decomposition of a raster file, as shown in figure 1. this system of recursive decomposition forming a spatial index is called a quad-tree (let us assume this term to be common knowledge in the gis community). as we will see below, there are several ways of defining a tile with respect to its size. there are generally two approaches to raster file decomposition:
• regular – all tiles are of the same size and the size is equal in both directions (the tile has a square shape). constructing a spatial index in the form of a quad-tree is not necessary to navigate through the raster. an optional implementation of raster pyramids is logical and easy.
– advantages: mapping a geographical coordinate to a particular tile takes constant time; it is easy to implement and operate.
– disadvantages: not very practical, as the input raster must have a square shape and its number of raster elements (cells) in both directions must be 2^n for a natural number n.
• irregular – the file is decomposed using a quad-tree procedure with a minimum tile attribute (see below). it may be applied to all existing raster files, as the limitation of regularity is not required here. an optional implementation of raster pyramids is still possible.
– advantages: no limits on rasters in terms of their size.
– disadvantages (time issue): each tile is of a different size. mapping a geographical coordinate to a particular tile must be done through the quad-tree spatial index, i.e. with logarithmic time complexity. this computational overhead can be successfully minimized in the internal algorithms of rg7, for example by using explicit topological links among neighbouring tiles.
– disadvantages (space issue): the number of tree nodes (i.e. virtual tiles) grows exponentially with the height of the tree.
the height of tree depends on size of the raster and size of the minimum tile. clearly, the requirement of having all files of size in 2n is not very practical. therefor, the rg7 concept strongly assumes the files of any size and implements the irregular tiling. figure 1: a demonstration of the irregular quad-tree decomposition of a raster to a tree of virtual and physical tiles in rg7, size of a tile is not fixed, i.e. is not equal for all tiles, and it is influenced by so called minimum region. four tiles is the minimum number of tile decomposition, i.e. having a raster (or generally any region) with countx × county cells and minimum region of size mregx × mregy cells, the raster is split into a quadruple of virtual sub-tiles only if (1) geoinformatics fce ctu 2011 14 hruby m.: designing a new raster sub-system for grass-7 holds. if the condition (1) is false, the region countx × county is not split and remains as a physical tile (see figure 1 where the red rectangle defines the minimum region for further decomposition). the regions which allow their decomposition are then called the virtual tiles. countx ≥ 2 ·mregx ∧ county ≥ 2 ·mregy (1) the raster spatial index is based on quad-trees, where the tree nodes are are referred as virtual tiles and leafs (non divisible regions) are the physical tiles. see the algorithm 1. the condition (1) also causes every physical tile to be generally larger then the specified minimum region, thus the minimum region does not denote a wanted size of a tile, it just define the stop condition of the decomposition. each tile (virtual or physical) is addressed by a prefix code as described in figures 2. in figure 2, the top-left tile is labelled 0 and when this one tile is decomposed further (see figure 2 on right), a prefix 0− is added. algorithm 1 decomposition of a region to a tree of sub-regions class ras_vtile { ras_vtile(region reg) : region(reg) { ... } ... void decompose(icoord mreg) { phtile = 0; if (region.countx() >= 2*mreg.x && region.county() >= 2*mreg.y) { int mx = region.countx()/2; int my = region.county()/2; subtiles[0] = new ras_vtile(region.sub_region(0, my, mx, region.county())); subtiles[1] = new ras_vtile(region.sub_region(mx, my, region.countx(), region.county())); subtiles[2] = new ras_vtile(region.sub_region(0, 0, mx, my)); subtiles[3] = new ras_vtile(region.sub_region(mx, 0, region.countx(), my)); for (int a=0; a<4; a++) subtiles[a]->decompose(mreg); } else phtile = new ras_phtile(region); } region region; ras_vtile *subtiles[4]; ras_phtile *phtile; } in such a convention, each physical tile has got its label and using that label, one can find the tile in a quad-tree hierarchy. the label is also an index key to identify the tile in a database storage. geoinformatics fce ctu 2011 15 hruby m.: designing a new raster sub-system for grass-7 01230-0 0-1 1-0 1-1 0-2 0-3 1-2 1-3 2-0 2-1 3-0 3-1 2-2 2-3 3-2 3-3 figure 2: addressing tiles in a decomposition of a raster (depth 1 and 2) architecture of the raster sub-system having defined the concept of quad-tree for rg7, we may proceed to description of the rg7 itself. the rg7 sub-systems consists of the following modules, each responsible for certain functionality: • physical module (ph) – ph implements a direct low level access to raster files. it creates a spatial index tree, reads and writes particular tiles. ph also implements a memory cache over tiles. this module has the major influence to practical run-time efficiency of rg7. 
• connection module (cn) – cn establishes a connection between a particular raster application and a physical module. cn implements a kernel which controls the user access to the files. cn also manage a possible multi-user (or multi-application) access to the files. • raster window module (rw) – rw implements an user specified view to a raster file. rw contains algorithms of raster resampling and presentation. rw specifies the user api in various formats, i.e. at least the full rg7 c++ api and grass standard-like c api. • raster application modules (am) – am is a set of user implemented raster analytical modules using rg7 api at a cn or rw level. the rg7 api shall connect users through the cn and rw modules. direct access to ph module is not assumed for a standard user. the direct access is allowed only for debugging and benchmarking purposes. the whole design is prepared for object-oriented (oo) implementation in c++. object oriented c++ is preferred for its high level of abstraction and support of semi-standard c++ libraries (boost, stl). to keep backward compatibility with grass-6 raster api, a certain part of rg7 api will contain headers in classical c language. there is absolutely no technical problem in combining a c and c++ code in one software package. moreover, in my personal opinion, the dogma of keeping grass in pure c might be a serious limitation in grass’es future development. geoinformatics fce ctu 2011 16 hruby m.: designing a new raster sub-system for grass-7 the physical module of rg7 definitions of the basic geometrical objects let xcoord denotes an abstract class implementing a 2d point in a plane2. let x and y are the two components of the coordinate – x representing the eastings and y representing the northings. then, let coord is defined in floating point numbers and icoord in integers. coord expresses a geographical coordinate (in any coordinate system) and icoord just a relative coordinate, mostly an index to the raster grid. region will denote a geographical rectangular space given by its left-bottom corner (lb:coord) and right-top corner (rt:coord). surely, speaking about lb and rt is equal to giving four numbers expressing north, south, east and west boundaries of the region. rasterization of the region is given by number of cells in east-west dimension – countx() and south-north dimension – county(). the region implements a method inside(coord i) returning true, resp. false, if a point i is geographically inside the region, resp. if it does not. basic definitions a physical tile a physical tile (ras_phtile in the code) is geographically defined by its region. the raster data contents is stored in a 2d array, in a matrix object, of dimension region.countx() × region.county() cells. other internal attributes are not interesting at the moment. let us see the main i/o methods of the physical tile. the functionality of ras_phtile is mainly in these methods: • wphys(icoord i, dtype v) – writes a value v to the matrix at i position. • rphys(icoord i) – returns a value at i position. • write_to_compressed_ba – the function outputs a serialised stream of matrix contents to a compressed byte array (compressed using run-length method). the compression methods are subjects for further work and thus kept simple in the current design. • read_from_compressed_ba – loads and decompresses the serialized stream to the matrix object. • allocate() and deallocate() the matrix object. 
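to make the list of methods above more concrete, here is a minimal illustrative sketch of a tile class with this interface. it is not the actual rg7 declaration – dtype, the std::vector-based matrix and the row-major indexing are assumptions of this illustration, and the serialization methods are omitted.

    #include <vector>
    #include <cstddef>

    typedef float dtype;                      // assumed cell data type (illustration only)
    struct icoord { int x, y; };              // relative grid coordinate, as used in the text

    class ras_phtile_sketch {
    public:
        ras_phtile_sketch(int countx, int county) : nx(countx), ny(county) {}

        void allocate()   { matrix.assign(static_cast<std::size_t>(nx) * ny, dtype()); }
        void deallocate() { matrix.clear(); matrix.shrink_to_fit(); }
        bool allocated() const { return !matrix.empty(); }

        void  wphys(icoord i, dtype v) { matrix[index(i)] = v; }    // write value v at local position i
        dtype rphys(icoord i) const    { return matrix[index(i)]; } // read value at local position i

        // write_to_compressed_ba / read_from_compressed_ba (run-length serialization)
        // are not shown in this sketch.

    private:
        std::size_t index(icoord i) const {   // assumed row-major ordering of the tile matrix
            return static_cast<std::size_t>(i.y) * nx + i.x;
        }
        int nx, ny;                            // region.countx() x region.county()
        std::vector<dtype> matrix;             // stays "virtual" (empty) until allocate() is called
    };

the real class additionally carries its geographical region and the compression routines listed above.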
the existence of data in physical tiles is only virtual and the tile’s contents is loaded just in the moment of its demand. by allocating the matrix(ras_phtile), we mean allocating a computer memory for matrix object. by deallocating we mean giving the memory back on heap. virtuality of the raster data contents the attribute matrix of ras_phtile consumes a non-trivial part of computer memory. when a raster file is opened, rg7 automatically creates its tree representation made of virtual tiles and physical tiles. but no data is loaded from the database storage yet and so, no physical tile allocates a memory for the matrix buffers. 2it might be extended with the 3rd dimension very soon. geoinformatics fce ctu 2011 17 hruby m.: designing a new raster sub-system for grass-7 ras_phtile object representing a particular tile stores the raster contents only virtually until its allocate() method is invoked. the method allocates a memory for matrix and the tile’s contents may be loaded (read_from_compressed_ba). when the tiles is not needed, its deallocate() method frees the matrix memory. clearly, before deallocating the buffer with some write changes, the tile’s contents must be stored in the database. this memory management approach is going to be described bellow in further details. the physical module in c++ classes let us now introduce the main c++ classes making the physical module of rg7 (see figure 3): • ras_phys_file – manages a spatial index of a file and physical tiles. this class is abstract in the meaning that it just manages the tiles without any direct link to their physical storage (this is done through the following ras_interface class). the ras_phys_file class manages an amount of memory used by tiles via calls allocate() and deallocate() (see section "memory management"). • ras_interface – holds metadata (an instance of ras_metadata) for a file and implements its particular data format. as it will be described bellow, the rg7 functionality may be extendible right through these interfaces. rg7 may implement interfaces for storing the data in files, sql databases or even in network services. • ras_metadata – implements a storage of metadata for a raster file. the metadata contains a lot of raster’s attributes, but at least two: region specifying the raster’s geometry and minimum_region determining the raster’s decomposition to tiles. the ras_phys_file instance represents a single opened raster file. when opening a raster file, the rg7 system instances a ras_interface relevant for the raster file (depending on its particular data format). an instance of ras_interface (simply the interface or iface) is responsible for the input and output of raster’s metadata (at least region and minimum_region) and for all i/o operations over tiles. then, an instance of ras_phys_file is created (and given the interface). having region and minimum_region, the ras_phys_file can establish the quad-tree representing the spatial-index (algorithm 1). by opening a raster, the rg7 only constructs relevant data structures in computer memory (objects ras_phys_file, ras_interface, ras_metadataand the spatial index). loading the raster contents (the map itself) depends then on user’s requirements. in multi-user (multiapplication) access, the ras_phys_file is instanced only once in the rg7 kernel. memory management over the physical tiles as it has been already mentioned, a given raster is split to a grid of physical tiles which are supposed to contain the relevant data. 
certainly, the whole raster will probably not fit the computer memory, therefor some sort of memory management must be designed. this memory management is going to be very similar to a well-known system of virtual memory managed by todays operating systems (virtual and physical memory pages). geoinformatics fce ctu 2011 18 hruby m.: designing a new raster sub-system for grass-7 figure 3: uml overview on the physical module classes a physical tile may be in one of two possible states (relevant for the tile’s memory consumption/reservation) – allocated or deallocated, i.e. consuming computer memory or not. when the user requests a raster attribute at some geographical coordinate, this request is translated to its logical coordinates which identify a particular tile p and local index lc. then: 1. if p is deallocated at the moment, ras_phys_file allocates a memory for p, and asks its interface (iface) to load the tile from its disk storage. then, p is allocated. 2. if p is allocated, p.rphys(lc) is returned. certainly, not all tiles can be allocated at the same moment due some operation memory limitations. for this purpose, ras_phys_file can be set to keep as maximum max_available_tiles physical tiles in allocated state (as default, max_available_tiles is set -1 and then there are no limits). as it has been mentioned, the memory management is inspired by the virtual memory management – all physical tiles contain data only virtually and the ras_phys_file object assigns the memory resources on demand, it means, in a situation when there are already max_available_tiles physical tiles allocated and another tile is required to load its data, one of the current allocated tiles must be deallocated. the algorithm doing such a decision is another problem to discuss. at the current state of rg7, the ras_phys_file object keeps certain access statistics on tiles and selects the latest accessed tile to be deallocated. geoinformatics fce ctu 2011 19 hruby m.: designing a new raster sub-system for grass-7 virtual storage interfaces one of the most valuable features of the rg7 design is in allowing developers to implement various formats of storing the rasters. ras_phys_file operates a raster file in an abstract manner invoking its interface for certain basic operations. every class derived from ras_interface implementing the following methods may define its own data format: • open() – opens an existing raster file and loads its metadata. • close() – closes all files needed by the raster layer. • create() – creates a new raster layer using the given metadata. • writemetadata() – stores the raster file’s metadata. • loadmetadata() – loads the raster file’s metadata. • swaptile(p)– if the contents of p.matrix was modified, p.matrix is serialized and flushed to disk storage (or any persistent device). invoked usually when p is selected to become deallocated or when closing the whole file. • swapped_tile_available(p)– returns a boolean saying whether a raster contents of p is present on storage or not. if p is filled with null values, there is no need to store that fact on disk. • load_swapped_tile(p)– a tile p loads its data contents from disk storage (p must be allocated before). having such an interface, one may implement any data format or a way of storing the raster data. for example, these interface definitions are going to be included in rg7 early design: • ras_interface_sqlite – the main assumed interface for rg7 via sqlite3 [4]. tile’s matrix contents is stored in an sql database in blob records. 
• ras_interface_postgres – similar to ras_interface_sqlite, but implemented for postgresql. postgresql works as an independent os process, thus this storage processor might be faster for heavier traffic loads than sqlite3 (especially with multiple-core cpus). this is an issue for further testing and experience. • ras_interface_grass6 – an interface providing a compatibility to current grass-6 raster storing format. the interface has to read multiple rows to complete a 2d tile, thus it can be efficient only with classical raster row-oriented applications. • ras_interface_wms – an interface providing an abstraction over wms services. wms raster files are selected as read-only then. • ras_interface_sfile – all physical tiles are stored in an unique disk file and the interface keeps an offset table in a separate disk file (similar to esri shapefile .shx). fast and rather easy to implement. efficient implementation of this physical storage level is essential for the rg7’s read/write performance. for this reason, the ras_interface_sqlite interface defines an unique database index (rasterid ×tileid) for fast searching in the db storage. geoinformatics fce ctu 2011 20 hruby m.: designing a new raster sub-system for grass-7 raster metadata in rg7 metadata are operated through the interface object using its writemetadata() and loadmetadata() methods. the loaded contents is then kept in ras_metadata objects having the following parts: • geographical description of the raster – boundary region of the raster (referred as basic region) and its size (number of cells in both directions). • user comments – various text fields inspired mostly in grass raster metadata. • tiling – minimum region (see 2). • values description – particular null value and the cell’s data type (char, two-byte integer and four-byte float). • reclassification – reclassification table and reference to an original raster file (may be generally in different format). • colour palette – reference to a standard implemented palette or an user-specified palette (just for graphical presentation). rg7 does not assume an extra raster storage to keep explicitly the null data, thus there must be one value reserved to specify the null contents of a cell. by default, the null value is set to numerical zero. let us remind that a tile containing only null values in all cells is not stored, i.e. swapped_tile_available(p) interface method should return false. if such a tile is modificated by writing some non-null data, the tile is then swapped when deallocated. the connection module of rg7 the connection module provides the interconnection between the user application (analytical module) and the internal raster kernel managing the set of active ras_phys_file. the connection module consists of two classes: • ras_kernel – the class is instanced as a singleton rg7. the rg7 object serves the users to open, create and close the requested raster files. for a required file, ras_kernel returns a particular raster object making a handle to a particular open raster. • raster – objects of this class provides a handle to open rasters, i.e. realize required read and write operations to the rasters. the object also gives the complete metadata information. the ras_kernel and raster objects are supposed to be an only channel to a particular source of raster data from the user’s perspective (to a local gis kernel, to a remote gis file server, etc.). there are some details in multi-user or multi-process accesses in the design to finish. 
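purely for orientation, the metadata parts listed above could be collected in a structure roughly like the following; all field names and types here are hypothetical and do not come from the rg7 sources.

    #include <string>

    enum cell_type { cell_char, cell_int16, cell_float32 };   // char / two-byte integer / four-byte float

    struct region_sketch {                 // stands in for the rg7 'region' class
        double north, south, east, west;   // boundary coordinates
        int    countx, county;             // number of cells in both directions
    };

    struct ras_metadata_sketch {
        region_sketch basic_region;        // geographical description of the raster
        std::string   comments;            // user comments (grass-like text fields)
        region_sketch minimum_region;      // tiling: stop condition of the decomposition
        cell_type     type;                // cell data type
        double        null_value;          // reserved null value (numerical zero by default)
        std::string   reclass_source;      // reference to the original raster; reclass table omitted here
        std::string   palette;             // standard or user-specified colour palette
    };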
at the moment, let us assume just one application processing the data through just one ras_kernel channel. the ras_kernel object registers all open/created raster files and assigns a ras_phys_file object for each one. multiple open request to a unique file always leads to a single ras_phys_file object. geoinformatics fce ctu 2011 21 hruby m.: designing a new raster sub-system for grass-7 as the raster objects provide the complete information access in both read/write directions (together with the window module), it in fact makes the basic necessary interface between an user and the raster gis kernel. the whole raster implementation is thus encapsulated inside this abstraction and so, the rg7 concept may serve as a new abstract api for applications even without the new tiling approach, just as an abstract layer over the classical grass raster engine. the window layer the rwindow class provides a final element of rg7 c++ api. an application may specify a window which can slide within the region of raster object. the important fact is that an user does not invoke explicitly the read/write operations – he just points the window as a text cursor and shifts that as he needs. when a window is placed (or shifted/moved), its rwindow object requests the raster object to obtain the relevant data. all data edits are made through the window as well, so if rwindow objects is asked to move at some other coordinate (or is being destroyed), rwindow object then performs the write operation automatically. rwindow object offers a basic resampling of a raster based on its internal attribute region. if the region is left default, i.e. taken from the file’s metadata (its basic region), then the window reads the raster in its original resolution and in fact, it accesses the physical raster data. furthermore, an user may specify his own region and then the region translates the relative coordinates of the window to the geographical coordinates and identifies the source cells. by default, the resample method is set to the nearest neighbor. specifying an own region is frequent in analytical tools respecting the standard grass monitor. the grass monitor settings (a global region accessible via g.region) is available by invoking ras_kernel::monitor(). the rwindow operations are these: • read(icoord c) – returns a value of a cell at c local position within the window. • write(icoord c, dtype val) – similarly, writes a value val. • topleft()/topright()/bottomleft()/bottomright() – move the window on such a position. • shift_right()/... – shifts the window one cell on right/left/down/up. • moveat(icoord c)/moveat(coord c) – points the window at a given position. there are various sorts of raster windows in the rg7 design: • rwindow – basic single-layer window with user defined size and region of resampling. • rwindowpicture – a variant of rwindow, exportable to a bitmap picture. • rwindowrow – a variant of rwindow where the size is automatically set to (1,columns) where columns follows the geometry of the given raster file. • rwindowmulti – a multi-layer window. it allows opening multiple raster files with an unique window, each file either for write or read access. geoinformatics fce ctu 2011 22 hruby m.: designing a new raster sub-system for grass-7 • rwindow3d – a window suitable for 3d raster requests. in fact, that is a multi-level rwindow defined on a single file. let us see the following demonstrative example of various access windows. 
rwindow r1(handle, icoord(3,3)); rwindow r2(handle, icoord(3,3), rg7.monitor()); rwindow r3(handle, icoord(1, rg7.monitor().countx()), rg7.monitor()); rwindowrow r4(handle, rg7.monitor()); rwindowmulti r5(icoord(3,3), rg7.monitor()); r5.add(handle); r5.add(rg7.open("elevation.dem"), readonly); rwindowpicture r6(handle, rg7.monitor()); the r1 window of size 3 × 3 is open for previously open raster using its natural (physical) resolution and for its natural region. similarly, r2 is created with the same window size, but defined on the global grass region specified by the g.region, i.e. including its resolution. the windows r3 a r4 are equivalent. the r5 has got two layers with rasters "geology" (previously open) and "elevation.dem". sliding r5 will load/save raster cells of both layers simultaneously. the window r6 is going to have its size specified by the current monitor settings, i.e. by its boundaries and resolution. in fact, it gives an whole picture of the raster as it might be seen in the d.mon gui window and then printed by r6.plot("out.bmp") as a raster picture. application interface to rg7 rg7 is implemented in gnu c++, it means that its code is made to be easily understable for developers, ready for enhancements and using various stl libraries wherever it is possible. over its core, various apis might be specified: • api compatible with the already existing c api standard such that absolutely no modifications to the existing raster tools will be needed. • c++ api giving the users all the new enhancements of rg7. • api connecting other programming languages. it is absolutely sure that implementing rg7 may not hurt the overall functionality of grass, i.e. of grass analytical and system modules. it is a matter of time and future experience if some modules will be re-implemented to gain a higher performance coming with a new raster storing and processing. an 3d raster support the rg7 concept is open for 3d-raster extension with only minor modifications to its basic algorithms, i/o interface and program code. the suggestion is the following: • the 3rd dimension is discretized in the regular way with a constant step. it makes a set of 2d unique raster files, where identification of a particular tile is done in two steps: geoinformatics fce ctu 2011 23 hruby m.: designing a new raster sub-system for grass-7 identification in 2d and determination of the layer in 3rd dimension. • labeling the tile is extended with a number of a required layer (i/o operations provided by the interfaces). • window may slide on an single or on all layers (3d window). experimental results this section is going to present few benchmark experiments demonstrating access times to rasters in different manners of their use. we assume the following datasets (table 1) imported from spearfish60 demonstration dataset [8]. the rasters were generated with appropriate g.region res= resolution and then exported by r.out.ascii. the data were then imported to rg7 software prototype. all benchmarks have been done based on the sqlite3 interface (see the section "virtual storage interfaces"), i.e. with rasters stored in sqlite3 database. the minimum region was set 32 × 32 cells for almost all layers (the "geolarge" file with 64x64 tiling). the benchmark was performed on a 4xcpu xeon 3ghz pc with 8 gb ram running on linux os. the benchmark measures the whole time of the application’s time run using the time unix utility, i.e. the time measured also includes some application overhead with initialization etc. 
it should be stressed at the very beginning of this case study that the experiment (called the benchmark here) does not have the quality of a proper laboratory measurement, and its main purpose was not to estimate the exact algorithmic complexity of rg7 raster access times. the purpose was rather to show the general difference between access times in grass and in rg7, which will be evident, so a not very precise measurement of run times is generally acceptable.

table 1: testing raster files imported from the spearfish60 dataset.

spearfish original name   test-name   original size (rows x columns)   test size            num. of tiles (imported to rg7)
soils                     soils       750 x 950                        750 x 950            256
geology                   geol        140 x 190                        140 x 190            16
geology                   geolbig     140 x 190                        14,000 x 19,000      65,536
geology                   geol05      140 x 190                        28,000 x 38,000      262,144
geology                   geolarge    140 x 190                        140,000 x 190,000    4,194,304 (tile >= 64x64)

table 1 lists the raster files taken from the spearfish dataset (spearfish original name) with their originally stored resolution (original size in rows and columns). the files were resampled and exported in grass and then given a case-study identifier (test-name) and a case-study resolution (test size). when imported into rg7, each file was automatically decomposed into raster tiles (number of tiles). the "geolarge" file was decomposed with a 64x64 minimum-region tiling, the others with the default 32x32 minimum-region tiling.

random physical read access
in this experiment, a given raster is opened and its "max available tiles" attribute, denoting the maximum number of tiles kept in the memory cache, is set (see the section "memory management over the physical tiles"). the experiment runs a given number of iterations, where in each iteration:
1. a random coordinate c within the raster's region is generated,
2. the cell value at position c is requested.
if the required physical tile is not present in the cache, the tile must be loaded from its disk storage (respecting the limitation of "max available tiles"). as the access is fully random, only two experiments make sense – an experiment with a single-tile cache ("max available tiles" is one) and an experiment with an unlimited cache ("max available tiles" is "full", i.e. unlimited). we should keep in mind that, with this random access, the randomly chosen cell almost surely falls on a different tile than in the previous iteration.

table 2: benchmark "random access" results measured on rg7.

test-name   max. available tiles   n. iterations   time [s]   avg. access time per tile [ms]
geol        1                      10^5            4.4        0.044
soils       1                      10^5            5          0.05
geolbig     1                      10^5            8          0.08
geol05      1                      10^5            10.3       0.10
geolbig     1                      10^6            96         0.096
geol        full                   10^6            0.162      –
soils       full                   10^6            0.253      –
geolbig     full                   10^6            8.6        0.09
geol05      full                   10^6            30.4       0.03
geolarge    full                   10^6            6          0.09

with only one "max available tile", at almost every iteration the requested tile must be loaded through the given interface, i.e. from the sqlite3 database engine. the results show that obtaining any physical tile takes on average about 0.1 ms no matter how large the whole raster file is (or, at least, the access times stay within the same order of magnitude). in other words, in the current implementation of rg7, this prototype can process approximately 10,000 tile loads per second with a completely random order of tiles. let us note that this number will double or triple when burst reading (database transactions, sql selects of multiple tiles, etc.) is used.
if the cache supports an unlimited number of physical tiles (referred to as "full"), almost all tiles of the input raster must be loaded during the iterations, but only once. the complete time for "geol" and "soils" is trivial; only "geolbig", compared to "soils", needs about 6 seconds more to load all 65,536 tiles (approximately 0.09 ms/tile), which is not too bad for this early implementation of rg7. see table 2 for the results of the experiment.
the experiment was extended with a more detailed sampling of the runtime of the benchmarking program for the "geol", "geolbig" and "geol05" raster files. figure 4 shows the results when using a single-tile cache and the unlimited cache. the left graph is not very surprising – access to a tile takes a certain constant time, so the resulting function is linear for all raster files. on the right side we may see (for the "geolbig" file) that after a certain number of iterations the cache gets filled with all tiles, and further iterations do not touch the disk: the requests are completed within the cache itself.
figure 4: benchmark "random access" results measured on rg7 with the geol, geolbig and geol05 datasets. measured runtime with a single-tile cache (left) and an unlimited cache (right).
figure 5 shows the average access time to a tile. let us mention mainly the effect of removing the application overhead (loading the program, establishing the quad-tree, etc.), which causes the computed average time to converge to a certain true value.
figure 5: benchmark "random access" results measured on rg7 with the geol, geolbig and geol05 datasets. measured average access time to a tile with a single-tile cache (left) and an unlimited cache (right).
let us now perform a similar experiment in grass. the experiment was implemented using a demonstration tool called r.example (a loop with random rast_get_row(infd, inrast, random()%nrows, data_type)). the region had to be set manually for the given experiment, e.g. g.region rast=geology.

table 3: benchmark "random access" results measured on grass.

test-name   n. iterations   time [s]   avg. access time per row [ms]
geol        10^5            0.5        0.005
soils       10^5            1.7        0.017
geolbig     10^5            34         0.34
geol05      10^5            112        1.12
geolarge    10^5            915        9.15
geol        10^6            4.5        0.0045
soils       10^6            17         0.017
geolbig     10^6            314        0.314
geol05      10^6            963        0.963
geolarge    10^6            4,176      4.176

the results in tables 2 and 3 clearly show the following important observations:
• grass is significantly faster than rg7 with a 1-tile cache on small files. that is probably because the grass kernel loads the whole file in one shot at the first read access and further readings are then served from its internal cache. the file "geol" is also so small (around 8 kb) that it takes only 2-3 pages of virtual memory and is thus kept whole in the os disk buffers.
• grass is significantly slower than rg7 on small files when rg7 has an unlimited cache (soils: 0.253 s versus 17 s).
• grass is significantly slower on large files (see the "geolbig" results), compared mainly in the case of a single-tile cache – 8 s versus 34 s (96 s versus 314 s).
to conclude the first experiment, it must be said that the measurement is not very precise due to the influence of the os disk buffers, which are not under the tester's control, but anyway:
• in the category of small files, we compare the grass results with the rg7 unlimited-cache results, and rg7 wins.
we assume that the raster is surely cached in grass, so this comparison is fair.
• in the category of large files, we compare the grass results with the rg7 single-tile results, and rg7 wins. we assume that grass does not keep such a large file in the buffers, so again the comparison is fair.

correlation between the number of tiles and the access time
one might wonder about the time complexity of index-sequential access to single raster tiles: is there any correlation between the raster's size (number of stored tiles) and the expected time necessary to fetch a required tile? certainly there is some connection, however a very small one, as shown in this sub-experiment. i generated nine square raster files similar to the previous ones. these rasters are not generated from any existing raster file (as before); they are fully artificial, but all have the same contents. see table 4 for the raster file definitions and the measured results. the experiments are basically identical to the previous set of runs: the benchmarking program was executed with a given raster file and set to use just a single-tile cache, and the number of iterations was set to 1 million in all cases. the table displays the overall time of each program execution. as we may see, across all raster files the million iterations (and tile loads) took about the same time no matter how large the raster actually is.

table 4: generated raster files and runtime measurements for this sub-experiment.

file id    number of rows (columns)   number of stored tiles   runtime of the experiment [s]
g1000      1,000                      256                      66
g2000      2,000                      1,024                    67
g5000      5,000                      16,384                   42
g10000     10,000                     65,536                   46
g20000     20,000                     262,144                  54
g50000     50,000                     1,048,576                74
g70000     70,000                     4,194,304                72
g200000    200,000                    16,777,216               54
g300000    300,000                    67,108,864               81

random window read access
this experiment consists of random read accesses with a square window (3x3). the experiment is done both with a 4-tile cache (the window may intersect up to 4 tiles) and with an unlimited cache. of course, practical raster analyses do not "jump" from one random point to another; the experiment just shows a possible access time when sliding a 3x3 raster window. the results (see table 5) are not compared with grass, as the measured times would simply be about three times the previously measured ones.

table 5: benchmark "window access" results.

test-name   max. available tiles   readings   time [s]   avg. access time per window [µs]
geol        4                      10^5       6          60
soils       4                      10^5       6          60
geolbig     4                      10^5       8.5        85
geol        full                   10^5       0.15       1.5
soils       full                   10^5       0.2        2
geolbig     full                   10^5       4.7        47
geolbig     full                   10^6       9          90

conclusion of the experiments
in the benchmarks presented here, the rg7 implementation appears to be more efficient than the classical grass raster sub-system. this paper is not a comparative study of grass and rg7 raster processing performance; that is a topic for another paper, which might be worked out when rg7 is more advanced and tuned. in any case, there is a considerable performance reserve, and hope, in the processing of various rwindow requests, because burst selects on the sqlite3 database complete in less time than the sum of the individual tile selects. that is the point where rg7 might compete with grass in classical row-sequential analyses as well. there is only one point of performance difference between the two approaches where grass still seems to be faster: d.mon. the d.mon utility displays a rather small number of rows, and so it reads a small number of rows from disk.
compared to that, rg7 has to read a sequence of tiles to assemble the single requested row. the same behaviour is likely when processing rasters in overview mode, i.e. not in their original resolution; a fast overview of the data can be provided via raster pyramids.

conclusions and future work
the rg7 design has been presented in this paper. the design is certainly at a very early developmental stage with, at the moment, no proper connection to its target infrastructure; as has been mentioned, grass-7 (or 8) might be that target. moreover, the rg7 implementation is currently rather theoretical and needs certain optimizations to ensure high performance of the resulting raster kernel.
using the virtual interfaces (see the section "virtual storage interfaces"), the rg7 library provides an abstraction over an unlimited number of data storage formats and methods. a gis system based on rg7 can operate raster data sources such as the grass raster format, the rg7 sqlite3 interface, wms, geotiff, etc. – all in the same manner.
the benchmark experiments demonstrated that random access to a raster via rg7 is faster than via the classical grass raster sub-system; this much is quite certain at the moment. one may say that this success is perhaps marginal, since a large number of practical raster analyses read the rasters sequentially and that works perfectly in the current grass. however, as mentioned above, the optimized rwindowrow interface will probably defeat this argument.
there might be some criticism regarding the search-time overhead in the quad-tree spatial index. in fact, there are two kinds of search trees involved: searching through the quad-tree spatial index and searching in the database index file; both are variants of tree data structures. let us mention that the practical quad-tree height is rather small, e.g. 12 levels for a raster of 200,000 × 200,000 cells, so the time spent searching such a tree is really marginal; the same holds for the sql database index file.
i wanted to show that tiling the raster, i.e. splitting the raster into a grid of small elements, can guarantee some sort of independence of the raster's size. to claim constant access time would be too strong, as several aspects of computer systems influence the result, but the experiments presented here give some support to the rg7 concept. the rg7 concept has just one weakness at the moment, namely the time spent generating the quad-tree when opening the raster; this is going to be fixed in the next rg7 development step. moreover, computing performance is not the biggest issue: rg7 attempts to show a different manner of accessing the raster data – and, i would say, a better manner than the current api provides. the idea of "windows" (or cursors?) of variable size is certainly not new. in computer science terminology, the raster storage and the raster api together form a so-called abstract data type (adt). computer scientists have known for many years that a good adt enables clever, efficient and flexible algorithms; a bad adt, on the other hand, surely kills the applications. let us give a good raster adt to the future grass program generation. the rg7 design certainly counts on the standard grass-6 raster api, ensuring full backward compatibility with the existing raster analytical tools.
geoinformatics fce ctu 2011 29

acknowledgement: this work has been supported by the grant agency of brno university of technology, grant no. fit-s-11-1 "advanced secured, reliable and adaptive it", and by the czech ministry of education under the research plan no. msm0021630528 "security-oriented research in information technology". this work was also supported by the european regional development fund through the it4innovations centre of excellence project (cz.1.05/1.1.00/02.0070). the author would like to thank the three anonymous reviewers for their useful comments and remarks.

references
1. grass homepage: http://grass.fbk.eu/
2. grass-6 raster api manual: http://grass.osgeo.org/programming6/gisrasterlib.html
3. postgresql homepage: http://www.postgresql.org/
4. sqlite3 homepage: http://www.sqlite.org/
5. gdal homepage: http://www.gdal.org/
6. postgis homepage: http://postgis.refractions.net/
7. raster3d manual page: http://grass.osgeo.org/manuals/html70_user/raster3d.html
8. spearfish data set: http://grass.fbk.eu/download/data6.php

geoinformatics fce ctu 2011 30

metrology in surveying and mechanical engineering
editorial
a new bachelor study programme was started at the faculty of civil engineering, ctu in prague, in 2014. metrology as a scientific and technical discipline is an inseparable element of the infrastructure and the economy and affects all technical disciplines. given the importance and social benefits of metrology, the ministry of industry and trade prepared the concept of the national metrological system (nms) of the czech republic for the period 2012 to 2016. the concept states that one of the essential conditions for further development of the economy is increased competitiveness, achieved among other things by increasing labour productivity and the quality of products and services. development is therefore also needed in education, research, development and innovation.
figure 1: metrological testing of the topcon gpt 7501 instrument
this can be achieved by creating programmes of basic and further education in metrology. therefore, the management of the faculty of civil engineering in prague decided to submit an application for accreditation of a new four-year bachelor's degree programme, metrology, with the study course metrology in surveying and mechanical engineering in full-time study.
figure 2: calibration of the laser tracker leica at401 with use of the laser interferometer
the symbiosis of surveying and mechanical engineering is given here by the natural blending of methods in the measurement of the shape and size of large components. these include measurements using basic gauges, micrometric taps, stirrup gauges, laser interferometry, coordinate measuring machines, levelling devices, plumbing instruments, total stations, photogrammetry or 3d scanners. accreditation was awarded from 3 march 2014 to 31 march 2020. the faculty of civil engineering and the faculty of mechanical engineering in prague execute the teaching jointly. the study is focused on the basic principles of metrology and quality management activities.
the study is based on theoretical foundation subjects (mathematics, physics of measurement, processing and analysis of measurements), on professional subjects (metrology in surveying, engineering metrology, quality management, accuracy of geometrical parameters in construction), and on subjects from the basic application areas (basics of construction, basics of mechanical engineering, geodesy). the professional education is completed by the topics of standardization and law. during the 5th to 7th semester there is an individual professional placement of 6 weeks.
figure 3: prism on the laser-interferometer line
this is a practically oriented bachelor programme; the student is not expected to proceed to a master's study programme after completing the bachelor's one. the graduate has basic knowledge and skills in surveying, building and mechanical engineering technology and the related legislation, and is equipped with special knowledge in the field of metrology and quality management.
jiří pospíšil
department of special geodesy, faculty of civil engineering, ctu prague
geoinformatics fce ctu 13, 2014 7

využití systému galileo ve stavebním inženýrství (use of the galileo system in civil engineering)
leoš mervart
department of advanced geodesy, faculty of civil engineering, ctu in prague
e-mail: mervart@fsv.cvut.cz
keywords: the galileo system, civil engineering, research plan msm 6840770032

description of the research plan

subject and aim of the research plan
the development of the first satellite navigation systems, also known as global positioning systems (gps), dates back to the 1960s. their importance for engineering practice began to grow with the gradual completion of the navstar (navigation satellite timing and ranging) gps system during the 1980s. since then the number of applications has grown steadily and gps has become indispensable in the most varied areas of human activity. with the development of new gps applications, however, certain limitations of navstar gps – at present the only fully operational system – have become apparent; they stem mainly from the fact that the system was originally designed for the needs of the united states armed forces and its designers did not anticipate hundreds of different civilian applications. for this reason, building its own satellite navigation system – one that, unlike the us system, would be designed primarily for a wide range of very diverse civilian applications – became one of the priorities of the european union member states. during the long and complicated preparatory phase of the project, both its overall concept and the name of the navigation system itself changed. negotiations between the european union and the united states, intended to ensure the so-called interoperability of the existing and the new navigation systems, were also difficult. at present it can be stated that there are no further political obstacles to building the european navigation system, and its detailed concept has been approved. in 2006 the project moved from the development phase to the deployment phase; the operational phase should be reached in 2008. the navigation system of the european union is based on two projects:
• the egnos (euro geostationary navigation overlay service) project – a joint project of the european space agency (esa) and the european commission which, according to the 2005 plan, will consist of three geostationary satellites;
• the galileo project.
the galileo system (again a joint project of esa and the european commission) is a very ambitious project which, after its completion, should represent the most advanced technology for precise positioning. from 1 july 2003 to the end of 2005 the project was in its development phase, the deployment phase started in 2006, and from 2008 the system is to be fully operational. the system is planned to comprise a total of 30 satellites in three orbital planes (inclination to the equator 56 degrees, distance from the earth 23,616 km). because an agreement between the usa and the european union "on the promotion, provision and use of the galileo and navstar gps satellite navigation systems" was signed on 26 june 2004, the way has been opened to user interoperability and radio-frequency compatibility of both systems. the galileo system currently has one of the highest priorities in all eu member states. resolution no. 218 of the government of the czech republic of 23 february 2005, on the organizational arrangements for the active participation of the czech republic in the galileo programme, declares the readiness of the czech republic to support its own business and research entities.
the design of the galileo services and signals was the subject of long discussion and many changes. it is now clear, however, that one of the galileo signals will lie in the high-frequency l1 band. this fact opens the way to simple and inexpensive receivers able to work with both systems – navstar gps and galileo. at the same time it opens the way to a substantial increase in the accuracy and, above all, the reliability of positioning when measurements from both navigation systems are combined. as a direct consequence of this qualitative leap, an enormous growth of new applications can be expected, including applications in civil engineering.
in view of the facts stated above, we believe it is necessary to concentrate the effort of the researchers of the faculty of civil engineering, ctu, on the research and development of galileo applications in the fields of geoinformatics, landscape engineering and civil engineering. we are convinced that for such a research project we have a high-quality team at our disposal, consisting of experts in satellite navigation systems, informatics, geodesy, transportation engineering, building structures and other civil engineering disciplines. we are also convinced that the proposed project is correctly targeted and correctly timed (with respect to the build-up of the galileo system), and that its realization will also bring substantial economic results.
the intended research plan consists of three consecutive parts:
1. research and development of methods for processing the signals of the galileo satellites, the combination of galileo measurements with the existing navstar gps global positioning system, the specifics of using galileo in the czech republic, and the connection to the present geodetic control of the czech republic.
2. research and development of methods for the efficient processing of positional information provided by the galileo system, its visualization, and the creation of database and geographic information systems (gis) based on data provided by the galileo system.
3. research and development of galileo applications in the individual fields of civil engineering:
• monitoring of deformations of bridge structures,
• monitoring of displacements of building structures by combining galileo measurements with laser scanning,
• guidance of construction machinery,
• long-term monitoring of displacements of tram and railway tracks,
• risk prevention in the transport of dangerous goods,
• the search for and development of new applications of satellite navigation systems in the building industry,
• the search for and development of new applications of satellite navigation systems and remote sensing methods in water management.
one of the main aims of the proposed research plan is thus the comprehensive development of civil engineering applications of the galileo system. by "civil engineering applications" we mean various technological procedures that involve obtaining positional information about building structures, guiding construction machinery, optimizing logistics during the construction process, and so on. on the one hand, these will be applications that are currently handled by other technologies (e.g. terrestrial measurements); in such cases the aim will be to develop galileo-based procedures that are more economical, more accurate, or less risky for the personnel carrying out the measurements. on the other hand, the qualitative leap in positioning accuracy and reliability brought by the galileo system will open the way to entirely new applications and technologies that could not be realized with existing means. these applications cannot be specifically foreseen, and therefore cannot be listed in the research plan proposal; we believe, however, that their search and development should form an important part of the proposed plan.
the proposed project is not, however, aimed only at the application level of the galileo system. the project is submitted by staff of the faculty of civil engineering, ctu, and surveyors and cartographers are strongly represented among its proposers (geodesy and cartography is one of the fields studied at the faculty). an equally important aim of the project is therefore the development of algorithms, methods and software applications for processing the original galileo measurements and for integrating the results obtained with this system into information systems; only on the results of this development can the individual applications in civil and landscape engineering build. the project is thus a synthesis of research in satellite geodesy and geoinformatics with concrete applications in civil and landscape engineering.

current state of research and of the level of knowledge in the field covered by the research plan, from the international and national points of view
from the part describing the subject and aim of the research plan, it is clear that knowledge from several scientific and technical disciplines is needed to reach the project goals. a brief summary of the state of research and the level of knowledge in the disciplines most important for the plan follows.

satellite geodesy and navigation
satellite geodesy, the theory of global positioning systems and the development of algorithms and software tools for processing satellite measurements form the "starting point" of the proposed project, since satellite measurements are the primary source of data on which the intended applications are based. satellite geodesy itself cannot exist without links to other specializations belonging to so-called higher (theoretical) geodesy; their common task is to enable the determination of the position of static or moving objects in a precisely defined coordinate and time system. because every position determination is based on measurements carried out in a concrete physical environment, the study of the physical properties of the earth's body, and of the changes of these properties in time, also becomes part of higher geodesy.
the staff of the department of advanced geodesy of the faculty of civil engineering, ctu, have been working on satellite navigation since the beginning of the 1990s. the head of the department, prof. mervart, is a co-author of two important software systems for processing navstar gps measurements – the bernese gps software (developed in cooperation with the astronomical institute of the university of bern) and the rtnet (real-time network) program used by the geographical survey institute of japan for monitoring the japanese geonet network (a network of about 1200 permanent gps stations). prof. mervart and ing. lukeš are authors or co-authors of many scientific publications on global positioning systems. other members of the team – prof. kostelecký, ing. vondrák, drsc., and ing. pešek – are experts in geodetic control, geodetic astronomy, and coordinate and time reference systems. the question of reference frames is an important part of the work, since the individual navigation systems may operate in differently defined and realized reference frames; their correct transformation, and the subsequent conversion of the results into the systems used in the czech republic, is a necessary condition for using satellite navigation systems in precise engineering applications.
scientific work in satellite geodesy is inconceivable without broad international cooperation. the team members mentioned above participate in international cooperation within bilateral agreements with our foreign partners (especially the astronomical institute of the university of bern) and within international scientific organizations – in particular the international gnss service and the international earth rotation and reference systems service (iers). among research institutions in the czech republic, our main partner is the research institute of geodesy, topography and cartography.

informatics, geoinformatics, digital cartography and geographic information systems
the second pillar of the intended research plan is a group of scientific disciplines which, if we want to cover them with a single term, could be called "geoinformatics". this modern discipline applies the knowledge of informatics – the science of processing and manipulating information – to the needs of geodesy, cartography and other scientific and technical disciplines concerned with measuring, depicting or (as in the case of the building industry) even reshaping the earth's surface. in our conception, geoinformatics is a very broad term that to some extent covers cartography, photogrammetry, remote sensing, mapping and the cadastre of real estate. the importance of geoinformatics is also documented by the fact that "geoinformatics" is the name and content of a newly accredited study branch at the faculty of civil engineering, ctu, whose teaching will start in the academic year 2006/2007.
the department of mapping and cartography of the faculty of civil engineering, ctu, is a leading workplace in the disciplines belonging to geoinformatics. part of the department is the remote sensing laboratory, whose research focuses on several areas. one of them is the monitoring of temporal changes in the landscape, which can be determined from remote sensing data – both satellite data (optical and radar) and aerial data (aerial survey photographs).
v procesu vyhodnocováńı sńımk̊u je nezbytná přesná lokalizace sledovaných změn. systém galileo a j́ım poskytovaná geinformatics fce ctu 2006 83 využití systému galileo ve stavebním inženýrství data umožńı źıskáváńı dat s podstatně větš́ı polohovou přesnost́ı a snazš́ı využit́ı źıskaných poznatk̊u pro praxi. laboratoř dálkového pr̊uzkumu země se již několik let zabývá problematikou diferenciálńı interferometrie. tato metoda umožňuje źıskávat informace o změnách v poloze územı́ na zemském povrchu. podp̊urným nástrojem pro posouzeńı výsledku zpracováni diferenciálńı interferometrie je např. použit́ı gis, kde je na základě r̊uzného druhu vstupńıch dat zkoumána teoretická možnost existence poklesových oblast́ı. t́ımto zp̊usobem již byly porovnávány hodnoty dat přeb́ırané z geologických, d̊ulńıch aj. podklad̊u, které však nelze použ́ıvat jako zcela spolehlivé pro potvrzeńı nebo vyloučeńı pokles̊u. systém galileo umožńı sledovat polohu ve vybraných lokalitách pr̊uběžně v řádu několika let. tato měřeńı budou porovnávána s výsledky interferometrických vyhodnoceńı. předpokladem je tedy měřeńı systémem galileo na předem vybraných lokalitách. tato data budou pravidelně vyhodnocována, informace vkládána do gis. daľśı součást katedry mapováńı a kartografie fsv čvut je laboratoř fotogrammetrie. činnost laboratoře se v posledńıch pěti letech soustředila zejména na využit́ı pozemńı fotogrammetrie v oblasti dokumentace památkových objekt̊u, kde bylo dosaženo řady významných úspěch̊u i v mezinárodńıch projektech. vyšš́ı formy vyhodnocovaćıch systémů digitálńı fotogrammetrie využ́ıvaj́ı princip̊u virtuálńı reality a jsou zastoupeny v laboratoři na čtyřech stanićıch. nosným projektem laboratoře z dlouhodobého hlediska je práce se systémem photopa, který představuje dnes již poměrně rozsáhlou fotogrammetricko-měřickou databázi drobných památkových objekt̊u. sběr těchto dat má geoinformačńı prvky, pro lokalizaci objekt̊u se předpokládá využit́ı evropského navigačńıho systému galileo. inženýrská geodézie inženýrské geodézie je aplikaćı geodetických metod v pr̊umyslu a stavebnictv́ı. mezi hlavńı úkoly inženýrské geodézie patř́ı kompletńı geodetické zajǐstěńı staveb – od praćı při projektové části výstavby přes vytyčeńı stavby až po dokumentaci jej́ıho skutečného provedeńı a v některých př́ıpadech i dlouhodobé sledováńı jej́ıch posun̊u a deformaćı. inženýrská geodézie se vyznačuje vysokými nároky na přesnost měřeńı a také t́ım, že měřeńı jsou prováděna ve velmi obt́ıžných podmı́nkách. nasazeńı nejmoderněǰśıch př́ıstroj̊u je často jedinou cestou pro splněńı požadavk̊u na přesnost a zároveň umožňuje dodržet bezpečnost práce a výrazně sńıžit riziko pracovńıch úraz̊u. v této souvislosti je třeba zmı́nit použit́ı družicových navigačńıch systémů při vytyčováńı velkých staveb, metodu laserového skenováńı či metody automatizovaného ř́ızeńı stavebńıch stroj̊u. současný navigačńı systém navstar gps byl již úspěšně použit v některých výše zmı́něných aplikaćıch. využit́ı systému galileo by však přineslo zvýšeńı přesnosti výsledk̊u a t́ım i nahrazeńı klasických terestrických metod v aplikaćıch inženýrské geodézie s vysokými nároky právě na přesnost určeńı polohy. ještě d̊uležitěǰśı dopad nového systému galileo by byl v př́ıpadech, kdy měřeńı jsou vykonávána v nepř́ıznivých podmı́nkách (např. omezená viditelnost družic zp̊usobená zástavbou atd.) 
vı́ce než dvojnásobný počet družic (celkem 54 družic při současném použit́ı systému navstar gps a galileo oproti pouhým 24 družićım navstar gps) by umožnil vysoce přesná měřeńı i v těchto obt́ıžných podmı́nkách. problematikou geodetických měřeńı na stavbách se zabývá katedra speciálńı geodézie fakulty stavebńı čvut a rovněž katedra geodézie a pozemkových úprav fsv čvut. doc. blažek, vedoućı posledně zmiňované katedry, se zabývá měřeńım deformaćı most̊u optickými metodami. geinformatics fce ctu 2006 84 využití systému galileo ve stavebním inženýrství ing. štroner, phd z katedry speciálńı geodézie fsv čvut se zabývá metodou laserového skenováńı staveb. doc. hampacher je odborńıkem na matematické zpracováńı geodetických měřeńı metodami vyrovnávaćıho počtu. stavebńı mechanika, dopravńı stavby, inženýrstv́ı životńıho prostřed́ı, vodńı hospodářstv́ı a vodńı stavby katedra stavebńı mechaniky fakulty stavebńı čvut se mimo jiné dlouhodobě zabývá monitorováńım statického a dynamického chováńı významných stavebńıch konstrukćı a detekćı jejich nadměrných statických deformaćı a dynamických výchylek. v současné době je přesnost družicových měřeńı navstar gps zpravidla nižš́ı než přesnost požadovaná při výše zmı́něném monitorováńı. lze však očekávat, že po zvýšeńı přesnosti zavedeńım systému galileo bude v některých př́ıpadech možné nahradit terestrická měřeńı měřeńımi družicovými s výraznými ekonomickými úsporami a zvýšeńım bezpečnosti práce. pro posouzeńı źıskaných výsledk̊u by byla velmi cenná i skutečnost, že opakovaná měřeńı by bylo možno nahradit měřeńımi permanentńıch družicových přij́ımač̊u a eliminovat tak vliv periodických jev̊u (např. stř́ıdáńı teplot v pr̊uběhu dne či roku) na celkově zjǐstěné chováńı stavebńıch konstrukćı. katedra železničńıch staveb fsv čvut se mimo jiné zabývá sledováńım posun̊u vybraných úsek̊u tramvajových a železničńıch trat́ı – např. zkušebńıch úsek̊u s novými konstrukčńımi prvky. v současné době jsou tyto posuny sledovány terestrickými metodami. nasazeńı družicových metod by bylo velmi výhodné jak z ekonomického hlediska, tak z hlediska bezpečnosti práce. podmı́nkou je nejen vysoká přesnost měřeńı, ale předevš́ım schopnost dosáhnout této vysoké přesnosti v poměrech běžných na železničńıch a tramvajových trat́ıch – omezeńı viditelnosti družic v zářezech, zhoršeńı př́ıjmu signál̊u v d̊usledku vegetace atd. katedra zdravotńıho a ekologického inženýrstv́ı fsv čvut se zabývá modelováńım distribučńıch śıt́ı a srážkoodtokových proces̊u v urbanizovaných povod́ıch i hodnoceńım ekologického stavu vodńıch tok̊u a vodárenských nádrž́ı, pro něž jsou zapotřeb́ı přesné informace o druhu a velikosti ploch v povod́ı včetně jejich výškového zaměřeńı a ohraničeńı a přesné umı́stěńı objekt̊u, např. povrchových znak̊u vodovod̊u a kanalizaćı, které by se s výhodou daly źıskat pomoćı družicových metod. katedra ocelových a dřevěných konstrukćı fsv čvut se pod́ıĺı na řešeńı úkol̊u, které si kladou za ćıl sledovat takové ocelové a dřevěné konstrukce, u kterých je riziko nadměrných deformaćı a posun̊u. tato rizika jsou zásadńı např́ıklad pro historicky cenné věžové dřevěné konstrukce střech. u většiny konstrukćı docháźı k postupné degradaci materiálu nosné konstrukce a při extrémńıch klimatických podmı́nkách (zat́ıžeńı větrem) hroźı riziko jejich poruch a nenávratných ztrát. proto je měřeńı navrhovanou metodou při využit́ı milimetrových přesnost́ı velmi cenné a může velmi dobře identifikovat poruchu konstrukce. 
použit́ı družicových metod u těchto konstrukćı je velice výhodné, nebot’ se jedná o stavby zpravidla převyšuj́ıćı okolńı zástavbu. proto je jakékoliv jiné měřeńı v takovýchto podmı́nkách velmi náročné, využit́ı družicových přij́ımač̊u je naopak v tomto př́ıpadě velice výhodné nebot’ nehroźı, že by přij́ımače byly zast́ıněné. použit́ı navrhované metody by rovněž významně přispělo při měřeńıch na vysokých stožárech, věž́ıch, komı́nech a jiných obdobných konstrukćıch. u takovýchto staveb je často rozhoduj́ıćı ii. mezńı stav (mezńı stav použitelnosti). zejména se jedná o dynamické účinky větru, které je nutno sledovat jak ve směru p̊usob́ıćıho větru, tak v kolmém směru, kde obzvláště válcové stavby mohou být účinkem větru rezonančně rozkmitávány a t́ım i ohrožovány únavovým geinformatics fce ctu 2006 85 využití systému galileo ve stavebním inženýrství poškozeńım. účinky větru se velmi komplikovaně sleduj́ı ve větrných tunelech (použ́ıvaj́ı se zmenšené modely), kde se pouze simuluj́ı skutečné účinky větru. měřeńı pomoćı družicových přij́ımač̊u by poskytlo velmi ojedinělé informace, na jejichž základě by bylo možné analyzovat skutečné p̊usobeńı větru na skutečných konstrukćıch. pro rozvoj v této oblasti navrhováńı konstrukćı by měl navrhovaný projekt významný vliv. katedra geotechniky je připravena k účasti na řešeńı výzkumného záměru využit́ım znalost́ı, týkaj́ıćıch se základových poměr̊u staveb (stavu horninového prostřed́ı v podzáklad́ı staveb) a jejich základových konstrukćı, včetně geotechnických př́ıčin statických poruch. tvarové deformace stavebńıch objekt̊u mohou být vyvolány i dynamikou horninového prostřed́ı, bez přihlédnut́ı k tomu, zda jde o autonomńı projevy v masivu, nebo o interakci masivu se stavbou. spolehlivé výsledky tedy přinesou zpravidla jen komplexńı posouzeńı. katedra je materiálně i personálně vybavena ke sledováńı a hodnoceńı dynamiky horninového prostřed́ı v interakćı se stavebńımi objekty. očekávaným výsledkem spolupráce s ostatńımi zúčastněnými řešiteli je mj. źıskáńı hodnotných geodetických a geotechnických podklad̊u pro typové hodnoceńı lokalit; řešitelský kolektiv katedry geotechniky poskytne také inženýrsko-geologické a geotechnické podklady i k ostatńım projektovaným pracem. dı́lč́ı ćıle výzkumného záměru 1. systémová analýza a stanoveńı základńıch úkol̊u (sa) proces navazuje na výsledky řešitel̊u dosažené před započet́ım projektu. prob́ıhá na začátku a zároveň po celou dobu trváńı projektu, aby byla zajǐstěna aktuálnost stanovovaných d́ılč́ıch úkol̊u v souvislosti s mezinárodńım vývojem v dané problematice. systémové řešeńı projektu sestává ze základńı analýzy řešených problémů, aplikačńı analýzy a následném stanoveńı požadavk̊u na funkcionalitu a vlastnosti subsystémů, aby byla zajǐstěna vzájemná provázanost jednotlivých proces̊u. 2. výzkum, vývoj, testováńı a optimalizace výpočetńıch algoritmů zpracováńı informaćı z družic galileo (ga) proces je jedńım z hlavńıch úkol̊u zamýšleného výzkumného závěru. navazuje na předchoźı práce řešitel̊u týkaj́ıćı se vývoje programů pro zpracováńı měřeńı navstar gps v reálném čase. v rámci této části výzkumného záměru chceme � vyvinout program pro zpracováńı tzv. fázových měřeńı systému galileo � posoudit vliv korekćı systému egnos na určovanou polohu přij́ımače, navrhnout optimálńı zp̊usob využit́ı systému egnos pro zamýšlené technické aplikace � zkoumat problémy vyplývaj́ıćı z kombinace měřeńı navstar gps a galileo (rozd́ılné referenčńı systémy atd.) 
a připravit softwarové nástroje pro řešeńı těchto problémů v současné době je připravována nová generace přij́ımač̊u družicových signál̊u. tyto nové přij́ımače se od současných lǐśı v tom, že některé hardwarové prvky jsou nahrazovány prvky firmwarovými. přij́ımače budou vybaveny výkonnými procesory a stanou se do jisté mı́ry univerzálńımi. úpravou či změnou firmwaru bude možno použ́ıt přij́ımač k měřeńı s r̊uznými globálńımi polohovými systémy či jejich kombinacemi (navstar gps, galileo a př́ıpadně i daľśı). přij́ımače budou mı́t vlastńı operačńı systém, který umožńı spuštěńı uživatelských geinformatics fce ctu 2006 86 využití systému galileo ve stavebním inženýrství programů př́ımo v přij́ımač́ıch. otev́ırá se tak cesta k vývoji nových aplikaćı, které optimalizuj́ı zpracováńı p̊uvodńıch surových dat, řeš́ı komunikaci s ostatńımi zař́ızeńımi v rámci dané technologie atd. výzkum a vývoj takovýchto aplikaćı, které jsou na hranici firmwaru a uživatelské aplikace, bude rovněž součást́ı naš́ı práce. 3. integrace polohových geodetických základ̊u do jednotného evropského rámce (pz) ćılem je realizace evropského souřadnicového systému etrf89 na územı́ čr s centimetrovou přesnost́ı a jeho navázáńı na stávaj́ıćı polohové a výškové souřadnicové systémy (s-jtsk, s42/83, čsns) s centimetrovou přesnost́ı. pouze při dodržeńı tohoto standardu bude možné provádět lokalizaci ve všech souřadnicových systémech s požadovanou přesnost́ı. protože pohyb družic je primárně popisován zákony nebeské mechaniky v nebeském kvazi-inerciálńım nebeském systému, bude zapotřeb́ı se věnovat též daľśımu zpřesňováńı transformačńıch vztah̊u mezi t́ımto nebeským a rotuj́ıćım pozemským referenčńım systémem (precese, nutace, světový čas, pohyb pólu). některé z parametr̊u orientace země (precese, nutace, světový čas) přitom neńı možné źıskat pouze z pozorováńı družic, je potřeba je dále kombinovat s pozorováńım vlbi; výzkum bude proto zaměřen též na techniku těchto kombinaćı. pro źıskáńı integrovaných polohových základ̊u je nutno mimo jiné provést daľśı zpřesněńı pr̊uběhu plochy kvazigeoidu na územı́ čr zpracováńım heterogenńıch dat, provést testováńı jeho přesnosti metodou “gps-nivelace“, vytvořit algoritmus transformace etrf89 – s-jtsk, resp. etrf89 – s42/83 a provést jeho softwarovou realizaci s ohledem na jeho začleněńı do gis softwar̊u 4. aplikace metod moderńı numerické matematiky (nm) nalezeńı efektivńıch a numericky stabilńıch metod řešeńı matematických problémů, které se vyskytuj́ı při zpracováńı měřeńı družicových navigačńıch systémů a při zpracováńı velmi rozsáhlých sobor̊u geoinformaćı. v návrhu výzkumného záměru nelze přesně odhadnout, které matematické problémy se při řešeńı vyskytnou. obecně lze očekávat, že bude nutno řešit problémy při řešeńı velkých soustav lineárńıch rovnic (při zpracováńı měřeńı družicových navigačńıch systémů se mohou vyskytnout soustavy s počtem neznámých v řádu 100 tiśıc). požadavkem řešeńı je nejen vysoké numerická stabilita, ale i rychlost. optimalizaci algoritmů lze dosáhnout t́ım, že využijeme určitých apriorńıch znalost́ı o struktuře systémů rovnic (např. jde o tzv. “ř́ıdké“ systémy rovnic nebo o systémy, v nichž některé neznámé jsou silně závislé atd.) při zpracováńı družicových měřeńı v reálném čase se nejčastěji (z d̊uvod̊u efektivity výpočtu) použ́ıvaj́ı r̊uzné modifikace tzv. kalmanova filtru. obecně lze ř́ıci, že tato filtrace může být numericky velmi nestabilńı. optimalizace filtračńıch algoritmů z hlediska numerické stability je daľśım z úkol̊u této části projektu. 5. 
monitorováńı deformaćı mostńıch konstrukćı (mk) tuto oblast považujeme za jednu z nejslibněǰśıch aplikaćı družicových měřeńı ve stavebnictv́ı. úkolem projektu je ověřeńı využitelnosti nového evropského navigačńıho systému galileo pro monitorováńı statického a dynamického chováńı stavebńıch konstrukćı a pro detekci jejich nadměrných statických deformaćı a dynamických výchylek. z hlediska použitelnosti, provozuschopnosti, dlouhodobé spolehlivosti a životnosti předpjatých mostńıch konstrukćı větš́ıch rozpět́ı je vysoce aktuálńı otázka trvalého r̊ustu deformaćı v čase. společenský význam problémů provozuschopnosti most̊u je mimořádný a jejich d̊usledky vyvolané náklady v ekonomickém pohledu mnohonásobně převyšuj́ı náklady vyvolané poruchami geinformatics fce ctu 2006 87 využití systému galileo ve stavebním inženýrství statického charakteru. zkušenosti ukazuj́ı na větš́ı hodnoty skutečných pr̊uhyb̊u oproti výpočtové predikci a na jejich dlouhodobý nár̊ust v čase skutečné dlouhodobé pr̊uhyby jsou větš́ı než pr̊uhyby stanovené dosavadńımi výpočty. př́ıčin tohoto jevu je celá řada a je třeba je objektivně analyzovat. jedńım (nikoliv však jediným) z významných faktor̊u ovlivňuj́ıćıch vývoj pr̊uhyb̊u je dotvarováńı a diferenčńı smršt’ováńı betonu. vzhledem k tomu, že jde o velmi složité jevy zahrnuj́ıćı interakci řady faktor̊u na r̊uzných úrovńıch mikrostruktury, které jsou ovlivňovány mnoha proměnnými účinky, je matematické vyjádřeńı vývoje těchto jev̊u nutně dosti složité. řešitelský tým katedry betonových konstrukćı a most̊u fsv čvut již přinesl zcela konkrétńı výsledky charakteru jak p̊uvodńıch př́ınos̊u v oblasti teorie stavebńıch konstrukćı, tak i podpory pro praktické projektováńı a sledováńı vývoje přetvořeńı velkých mostńıch staveb. byly vytvořeny nové matematické modely a metodika výpočt̊u, zobecněny výsledky a zpracována praktická doporučeńı která nacházej́ı uplatněńı jakožto účinný nástroj pro spolehlivý a hospodárný návrh konstrukćı, pro dosažeńı úspor konstrukčńıch materiál̊u, energie a finančńıch náklad̊u, a to nejen na výstavbu, ale i na údržbu, opravy a rekonstrukce. pro kalibraci teoretických predikćı maj́ı informace o skutečném pr̊uběhu nár̊ustu deformaćı mostńıch konstrukce z předpjatého betonu zcela zásadńı význam. lze je využ́ıt jak k posuzováńı stavu sledované mostńı konstrukce, tak i k ověřeńı výstižnosti matematických model̊u predikce dotvarováńı. př́ıklad: na mostu na dálnici d8 přes ohři u doksan je dlouhodobě sledován vývoj pr̊uhyb̊u. v nejdeľśım poli, které měř́ı cca. 130 m, byl za 5 let zjǐstěn nár̊ust trvalého pr̊uhybu uprostřed rozpět́ı o cca. 3 cm. při takto velkých deformaćıch je možné uvažovat o jejich měřeńı metodami družicové geodézie, u nichž lze očekávat oproti běžným metodám nejen srovnatelnou či vyšš́ı přesnost, ale zejména možnost monitorováńı v pr̊uběhu celého dlouhodobého měřeńı. zat́ıžeńı změnou teploty představuje u významných stavebńıch konstrukćı podstatnou složku jejich celkového namáháńı. informace o deformaćıch konstrukce zp̊usobené změnou jejich teploty, které by byly źıskány pomoćı systému galileo, doplněné o měřeńı změn teploty, které deformace vyvolaly, lze využ́ıt k upřesňováńı poznatk̊u o podkladech, výpočtových modelech a postupech pro výpočet a posuzováńı vlivu změn teploty na spolehlivost zkoumaných konstrukćı. př́ıklad: v současné době je na několika mostech v čr soustavně sledována změna jejich teploty za účelem ověřeńı hodnot, které jsou předepsány v převzaté evropské normě en 1991-1-5. 
katedra stavebńı mechaniky fsv čvut sleduje změny teploty mostu z předpjatého betonu přes sedlický potok na dálnici d1 v řezu lež́ıćım uprostřed jeho nejdeľśıho pole, jehož rozpět́ı je 75 m. pr̊uhyb od změny teploty dosahuje cca. 1 cm. z výše uvedených skutečnost́ı je zřejmé, že systém galileo by mohl být použit pro : � monitorováńı nár̊ustu trvalých pr̊uhyb̊u významných mostńıch konstrukćı z předpjatého betonu. � monitorováńı nár̊ustu poddajnosti významných stavebńıch konstrukćı zp̊usobené postupnou degradaćı jejich nosné konstrukce. geinformatics fce ctu 2006 88 využití systému galileo ve stavebním inženýrství � monitorováńı kvazistatických deformaćı významných stavebńıch konstrukćı vyvolaných změnou jejich teploty. � monitorováńı změn základńıch vlastńıch frekvenćı významných stavebńıch konstrukćı v reálném čase zp̊usobené postupnou degradaćı jejich nosné konstrukce. � detekce nadměrných statických deformaćı sledované konstrukce vlivem extrémńıho statického nahodilého zat́ıžeńı (např. sněhem). � detekce nadměrných dynamických výchylek sledované konstrukce vlivem extrémńıho dynamického nahodilého zat́ıžeńı (např. rozkmitáńı sledované konstrukce extrémńımi účinky větru, rozkmitáńı lávky pro pěš́ı vandaly) nebo ztrátou aerodynamické stability. v prvńı fázi se předpokládá porovnáńı výsledk̊u měřeńı źıskaných systémem galileo s výsledky měřeńı provedeného klasickým zp̊usobem (řešitelské pracovǐstě: katedra geodézie a pozemkových úprav fsv čvut). po ověřeńı výsledk̊u by měl být možný přechod od klasických měřeńı k měřeńı družicovým s výhodami v ekonomičnosti a bezpečnosti práce a spolehlivosti výsledk̊u. součást́ı úkolu je rovněž vývoj metod pro matematicko-statistické zpracováńı źıskaných dat a programové zajǐstěńı výpočetńıch praćı. 6. sledováńı poruch a nadměrných deformaćı významných historických pozemńıch objekt̊u a staveb, sledováńı výchylek a deformaćı vysokých stožár̊u, věž́ı a komı́n̊u (hk) ćılem této části projektu je ověřeńı využitelnosti nového evropského navigačńıho systému galileo pro monitorováńı statického a dynamického chováńı významných historických stavebńıch konstrukćı a pro detekci jejich nadměrných statických deformaćı a dynamických výchylek, které zpravidla signalizuj́ı poruchu v nosné konstrukci. rizika poruch a poškozeńı jsou zásadńı např́ıklad pro historicky cenné věžové dřevěné konstrukce střech. u většiny dřevěných konstrukćı docháźı k postupné degradaci materiálu nosné konstrukce (p̊usobeńı vlhkosti, dřevokazných šk̊udc̊u atd.) a při extrémńıch klimatických podmı́nkách (u věžových staveb zejména zat́ıžeńı větrem) hroźı riziko jejich poruch. proto by bylo využit́ı družicových přij́ımač̊u s milimetrovými přesnostmi velmi cenné a může velmi dobře identifikovat poruchu konstrukce a poskytnout odpověd’ na otázku, zda je nutné přistoupit k sanaci (ześıleńı) nosné konstrukce. použit́ı družicových metod u těchto konstrukćı je velice výhodné, nebot’ se jedná o stavby zpravidla převyšuj́ıćı okolńı terén a okolńı stavby. jakékoliv jiné měřeńı je v takovýchto podmı́nkách velmi náročné a drahé, využit́ı družicových přij́ımač̊u se jev́ı naopak v tomto př́ıpadě velice výhodné nebot’ nehroźı, že by přij́ımače byly zast́ıněné, neńı potřeba stálá obsluha, měřeńı lze velmi snadno omezit pouze na určité časové obdob́ı, ve kterém je konstrukce vystavena extrémńım účink̊um. družicová měřeńı jsou jednou z mála možnost́ı, jak ověřit chováńı vysokých stožár̊u, věž́ı, komı́n̊u a jiných obdobných konstrukćıch při p̊usobeńı větru. 
u těchto staveb je zpravidla rozhoduj́ıćı mezńı stav použitelnosti (tzv. ii. mezńı stav). jedná se zejména o dynamické účinky větru, které je nutno sledovat jak ve směru p̊usob́ıćıho větru, tak v kolmém směru, kde obzvláště válcové stavby mohou být účinkem větru rezonančně rozkmitávány a t́ım i ohrožovány únavovým poškozeńım. účinky větru se velmi komplikovaně sleduj́ı na zmenšených modelech ve větrných tunelech, kde se simuluj́ı skutečné účinky větru. měřeńı pomoćı přij́ımač̊u galileo geinformatics fce ctu 2006 89 využití systému galileo ve stavebním inženýrství by poskytlo velmi ojedinělé informace, na jejichž základě by bylo možné analyzovat skutečné p̊usobeńı větru na skutečných konstrukćıch. vzhledem k proporćım těchto objekt̊u je jejich poměr výšky a š́ı̌rky velmi nevýhodný pro ověřováńı ve větrných tunelech, sledováńı výše zmı́něných veličin na skutečných objektech by znamenalo výrazný posun v poznáńı v tomto oboru navrhováńı konstrukćı. daľśım př́ınosem by bylo prováděńı měřeńı na takových konstrukćıch, které již nesou poměrně značné množstv́ı anténńıch systémů a u kterých je tendence jejich počty a rozměry dále zvyšovat. s t́ımto stavem se budeme setkávat stále častěji, nebot’ jde o doprovodný jev spojený s rozvojem komunikačńıch śıt́ı. výškových budov a konstrukćı je omezený počet, v zastavěných lokalitách již dále neńı možné stavět nové výškové stavby a konstrukce a proto docháźı k nár̊ust̊um ploch anténńıch systémů na stávaj́ıćıch konstrukćıch. tento r̊ust má však svoje hranice a rizika z poruch při extrémńıch povětrnostńıch podmı́nkách stoupaj́ı. proto by bylo možné na vybraných konstrukćıch nejprve sledovat jejich chováńı při současném stavu a po té, co se počet či plocha anténńıch systémů navýš́ı. při významném nár̊ustu deformaćı bude možné rozhodnout, zda technologie neohrožuje bezpečnost a stabilitu konstrukce a popř́ıpadě jak velké přit́ıžeńı od vodorovných zat́ıžeńı bude ještě pro konstrukci př́ıpustné z hlediska zachováńı spolehlivosti celého systému. 7. aplikace systému galileo v krajinném a vodohospodářském inženýrstv́ı (vi) řešitelská pracovǐstě (katedra hydromelioraćı a krajinného inženýrstv́ı fsv čvut a katedra zdravotńıho a ekologického inženýrstv́ı) se budou zabývat vývojem expertńıch a varovných systémů pracuj́ıćıch pro podporu rozhodováńı v reálném čase. půjde předevš́ım o následuj́ıćı oblasti: � vodńı hospodářstv́ı – sledováńı kritických situaćı jako jsou havárie, povodně, apod. � krajinné inženýrstv́ı – mapováńı cenných prvk̊u v krajině a jejich sledováńı, ochrana př́ırody a krajiny � doprava a jej́ı dopad na životńı prostřed́ı přesná měřeńı pomoćı systému galileo budou sloužit ke zpřesňováńı polohových a výškových informaćı povrchových znak̊u a k přesnému umı́stěńı dat dálkového pr̊uzkumu země. možné aplikace jsou např. 
v následuj́ıćıch oblastech: � přesné vytyčeńı povrchových znak̊u vodovod̊u a kanalizaćı v souřadnićıch x,y,z umožňuj́ıćı rychlý př́ıstup k ovládaćım prvk̊um a šachtám pro účely provozu a rychlého zásahu při haváríıch (digitalizace mapových podklad̊u neposkytuje výškovou souřadnici, geodetická měřeńı jsou náročná a nákladná, prvky jsou často obt́ıžně př́ıstupné, zejména kv̊uli husté dopravě) � informace pro srážkoodtokové modely kvantity a kvality vody (typy a velikosti ploch v územı́, jejich sklony), � podklady pro hodnoceńı ekologického stavu vodńıch tok̊u a vodárenských nádrž́ı (jejich zaměřeńı, sledováńı změn koryt, pobřežńıch zón a pokryvu povod́ı, koĺısáńı hladin), � sledováńı haváríı ovlivňuj́ıćıch vodńı zdroje, postup eutrofizace nádrž́ı, � sledováńı pokles̊u inženýrských śıt́ı v d̊usledku poklesu povrchu d̊ulńı činnost́ı, apod. geinformatics fce ctu 2006 90 využití systému galileo ve stavebním inženýrství z obou řešitelských pracovǐst’ se předpokládaj́ı jednak výstupy v teoretické rovině – studie proveditelnosti, návrhy aplikaćı a posouzeńı technických nárok̊u systému a ve spolupráci s ostatńımi pracovńımi skupinami definováńı nutných technických požadavk̊u na systém, jednak výstupu v formě demonstrace jednotlivých aplikaćı 8. aplikace systému galileo v silničńım stavitelstv́ı (si) přesná lokace v čase a mı́stě se stává nezbytnou součást́ı silničńı dopravy a silničńıho stavitelstv́ı. aplikace existuj́ı např. v ř́ızeńı a regulaci nákladńıch vozidel, snadněǰśıch kontrolách při přet́ıžeńı, zpoplatněńı dálničńıch a rychlostńıch komunikaćı atd. jedńım z d̊uležitých úkol̊u je sledováńı přet́ıžených vozidel a nadměrných a nebezpečných náklad̊u během přepravy. nový systém galileo se též jev́ı výhodný pro řešeńı problematiky oprav a údržby vozovek. jde např. o optimalizaci odstraňováńı sněhu v zimńıch měśıćıch, diagnostiku aktuálńıho stavu a poruch vozovek, včetně měřeńı jejich proměnných a neproměnných parametr̊u. řešitelské pracovǐstě (katedra silničńıch staveb fsv čvut) se bude zabývat vývojem expertńıho systému, který by komplexně řešil výše uvedenou problematiku s využit́ım sytému galileo jako základńıho zdroje prostorových dat. pozornost bude věnována prob́ıhaj́ıćım projekt̊um gps v kanadě a jihovýchodńı asii. systém fsv však bude využ́ıvat i ty vlastnosti nových navigačńıch systémů, kterými se tyto lǐśı od systémů stávaj́ıćıch. jde předevš́ım o možnost komunikace mezi uživatelem (např. vozidlem) při zjǐstěńı krizové situace a expertńım systémem zajǐst’uj́ıćım reakci na vzniklou situaci. 9. aplikace systému galileo v železničńım stavitelstv́ı (žs) řešitelské pracovǐstě (katedra železničńıch staveb fsv čvut) se bude zabývat vývojem aplikaćı systému galileo pro dlouhodobé sledováńı prostorových posun̊u vybraných úsek̊u tramvajových a železničńıch trat́ı. půjde v tomto př́ıpadě o nahrazeńı technologíı založených na terestrických měřeńıch technologíı založenou na měřeńıch družicových s výhodou vyšš́ı ekonomičnosti a bezpečnosti práce. sledováńı posun̊u trat́ı je nezbytné v př́ıpadech, kdy jde o tratě s novými konstrukčńımi prvky, vysokorychlostńı tratě a tratě ohrožené např. svahovými pohyby v d̊usledku lidské činnosti (tzv. technogenńı pohyby např. v d̊usledku poddolováńı apod.) ćılem automatického sledováńı je zajǐstěńı vyšš́ı bezpečnosti provozu kolejových vozidel. 10. kombinace metody laserového skenováńı a systému galileo (ls) řešitelské pracovǐstě (katedra speciálńı geodézie fsv čvut) se zabývá metodami sledováńı a dokumentace staveb tzv. laserovými skenery. 
jde o velmi efektivńı a moderńı metodu, která umožňuje hromadný sběr prostorových dat a jejich následné poč́ıtačové zpracováńı a vizualizaci. nejd̊uležitěǰśı použitý př́ıstroj – laserový skener – pracuje v lokálńım souřadném systému a výsledky měřeńı je třeba transformovat do souřadnicového systému mapového podkladu, národńıho referenčńıho systému nebo obecně do zvoleného dobře definovaného systému souřadnic. pro efektivńı a přesné provedeńı této transformace se jev́ı jako velmi vhodné vyvinout technologie kombinace laserového skenováńı s určováńım polohy opěrných bod̊u metodami družicové geodézie s využit́ım systému galileo. navržený úkol by zahrnoval � výzkum možnost́ı sběru dat laserovými skenery pro dokumentaci staveb, � výzkum možnost́ı sběru dat laserovými skenery pro analýzu změn stavebńıch konstrukćı a daľśıch objekt̊u, geinformatics fce ctu 2006 91 využití systému galileo ve stavebním inženýrství � zpracováńı, vizualizace a prezentace dat poř́ızených laserovými skenery, � výzkum možnost́ı propojeńı měřeńı laserovými skenery s družicovými daty systému galileo, � porovnáńı a skloubeńı terestrických geodetických technologíı a moderńıch navigačńıch metod se zaměřeńım na evropský navigačńı systém galileo, � výzkum možnost́ı źıskáváńı podklad̊u pro rozhodováńı podnikového facility managementu, � využit́ı źıskaných 3d model̊u v městských informačńıch systémech (mis) 11. aplikace systému galileo v experimentálńı geotechnice (eg) při geotechnickém pr̊uzkumu je posuzován stav horninového či zeminového prostřed́ı a jeho možný vliv na stavbu, nebo na již existuj́ıćı objekty. častým úkolem je vyhledáváńı diskontinuit, oslabených zón, dutin a daľśıch podzemńıch nehomogenit. to vše poskytuje pr̊uzkum pomoćı georadaru, nejlépe v kombinaci s daľśımi metodami (mělkou refrakčńı seismikou, mikrogravimetríı). využit́ı georadaru umožňuje źıskat neocenitelné informace o geologické stavbě v okoĺı trasy stavby, jako např. o hloubce a reliéfu skalńıho podlož́ı, litologických změnách, výskytu poruchových zón apod. umožňuje lokalizaci inženýrských śıt́ı a podzemńıch objekt̊u. na základě výsledk̊u pr̊uzkumu pomoćı georadaru lze optimalizovat trasu stavby, zatř́ıdit zemńı práce, stanovit nejvhodněǰśı technologii, stanovit rozsah trhaćıch praćı, posoudit stabilitu územı́ či př́ıpadný vliv stavby na okoĺı. georadar má své uplatněńı jak během výstavby tak po jej́ım skončeńı (např. při posuzováńı kvality zhutněńı). nespornou výhodou je źıskáńı vysoké hustoty měřených dat. zat́ımco vzdálenost mezi pr̊uzkumnými vrty čińı často několik set metr̊u, krok měřeńı při použit́ı georadaru se pohybuje v řádu decimetr̊u. využit́ı georadaru ve spojeńı se systémem galileo umožńı źıskávat přesně lokalizované výstupy tohoto pr̊uzkumu navázáńım na souřadný systém. bude možné vytvořit 3d automatizovanou kontinuálńı databázi podlož́ı rozsáhlých územı́ spojeńım nedestruktivńıho zp̊usobu geotechnického pr̊uzkumu se systémem galileo. 12. hodnoceńı rizika přetvořeńı ve spolup̊usobeńı stavebńı konstrukce a horninového maśıvu (gt) ćılem této části projektu je sledováńı historických staveb (hrady, zámky – kunětická hora, valdštejn), skládek, výsypek (palivový kombinát úst́ı nad labem rabenov), hald, poddolovaných územı́ (vědecko výzkumná podzemńı laboratoř, štola josef, mokrsko), svahových deformaćı (čertovka – úst́ı n.l., vaňov), popř́ıpadě dopravńıch staveb (dálničńı stavby), hydrotechnických (sypané hráze apod.), pr̊umyslových staveb a inženýrských konstrukćı (mosty) z hlediska př́ırodńıch a antropogenńıch deformaćı horninového maśıvu. 
po arch́ıvńı studii, by byly vybrány citlivé lokality z hlediska existuj́ıćıch pohyb̊u (svahové deformace, eroze, povodňové transporty a akumulace, senzitivńı zeminy, lidský zásah). po zúžeńı výběru na počet objekt̊u, který je ve výzkumném záměru časově a personálně zpracovatelný, budou osazeny měřičské prvky a bude prováděno režimńı sledováńı, které bude vztaženo k měřeńım prováděným na horninovém maśıvu, př́ıpadně v podzemı́ a ve vrtech. závěrem a výsledkem bude hodnoceńı možnosti podobných měřeńı a sledováńı na stavebńıch objektech. geinformatics fce ctu 2006 92 využití systému galileo ve stavebním inženýrství 13. vývoj webových služeb pro geoinformatiku (ws) ćılem procesu je vývoj obecného objektového systému pro geoinformatiku s ćılem sjednotit rozličné geoinformačńı služby tak, aby př́ıstup k nim byl homogenńı. rozhrańı každé služby bude popsáno pomoćı xml v jazyce wsdl. pro uživatele s registrovaným účtem bude možné přistupovat k výsledk̊um zadaných úloh, vlastńı geoinformačńı systém bude napsán v jazyce java s využit́ım servlet̊u a databázového ovladače jdbc. oprávněńı uživatelé budou moci registrovat v systému vlastńı úlohy. 14. referencováńı státńıch mapových děl velkých a středńıch měř́ıtek pomoćı systému galileo (ma) využit́ı měřeńı družicového navigačńıho systému je nemyslitelné bez začleněńı výsledk̊u do stávaj́ıćıch mapových podklad̊u. na územı́ čr se vyskytuje řada státńıch mapových děl celkem ve čtyřech souřadnicových systémech a třech kartografických projekćıch, s neobyčejně pestrou paletou systémů kladu a značeńı mapových list̊u. pro potřebu digitálńı kartografie, výměnu kartografických dat se zahranič́ım a obecně všechny moderńı technické aplikace je třeba zavést bezešvé mapy (tj. mapa jako celek nikoli soubor mapových list̊u) v zobrazeńı a souřadnicovém systému podle volby uživatele. problémy vznikaj́ı např. proto, že podkladové mapy jsou źıskány skenováńım a soused́ıćı listy nemaj́ı vyrovnané styky a nav́ıc maj́ı nepravidelnou srážku. řešitelské pracovǐstě (katedra mapováńı a kartografie fsv čvut) se bude zabývat vývojem nástroj̊u pro převod existuj́ıćıch mapových podklad̊u do jednotného (elektronického) formátu tak, aby je bylo možno použ́ıvat společně s družicovým navigačńım systémem galileo. 15. aplikace systému galileo pro zvýšeńı efektivity vedeńı katastru nemovitost́ı (kn) technické aplikace navigačńıho sytému ve stavebńım inženýrstv́ı záviśı na bezchybné funkci nového informačńıho systému katastru nemovitost́ı (iskn), a na rychlém poskytováńı aktuálńıch dat pro jeho grafickou část (sgi). v této souvislosti bude řešena problematika tvorby rychlých výstup̊u v podobě tématických map pro krizový management v době živelńıch katastrof nebo obdobných situaćı. lokalizaci vybraných prvk̊u v územı́ pomoćı systému galileo bude možno vhodně kombinovat s obrazovými daty (barevná ortofota) určenými pro zjǐst’ováńı a sledováńı využ́ıváńı pozemk̊u systémem iacs, podle požadavk̊u a pravidel eu pro poskytováńı dotaćı pro zemědělskou výrobu. 16. využit́ı systému galileo ve fotogrammetrii (fg) laboratoř fotogrammetrie se zabývá řadou úkol̊u, které jsou vázány na přesné určeńı polohy. činnost laboratoře je dlouhodobě zaměřena na dokumentaci a prezentaci památkových objekt̊u s ćılem vytvořeńı rozsáhlé virtuálńı databáze památkových objekt̊u v prostřed́ı śıtě internet. prvńı úspěšné pokusy v této oblasti byly již u nás provedeny http://lfgm.fsv.cvut.cz, funkčńı je prototyp databáze malých památkových objekt̊u photopa. 
ten by měl být doplněn zejména o možnosti animaćı a virtuálńıch rekonstrukćı objekt̊u a měl by být zpř́ıstupněn do formy zjednodušené databáze široké veřejnosti pro virtuálńı prohĺıdky a turistiku. vzhledem k ukládáńı rozsáhlého množstv́ı dat do prostřed́ı gis a jejich lokalizovatelnost je nutno pro každý objekt určit definičńı bod nebo body. dosud jsme použ́ıvali pro tuto činnost odměřeńı polohy z podrobné mapy nebo turistickou gps. daľśım předpokládaným stupněm systému bude vizualizace objektu na základě jeho polohy v mapě. zde je možnost systému galileo ve geinformatics fce ctu 2006 93 http://lfgm.fsv.cvut.cz využití systému galileo ve stavebním inženýrství spojeńı s digitálńı mapou. obdobný př́ıstup lze využ́ıt při dlouhodobém pr̊uzkumu geoglyf̊u a petroglyf̊u v perú, který ve spolupráci s htw dresden a inc peru prob́ıhá již několik let. zde jde o využit́ı metod přesných gps pro dokumentačńı práce v extrémńıch podmı́nkách pouště a vysokých hor. daľśı výzkumná činnost předpokládá vytvořeńı systému pro v́ıceúčelovou navigace po kulturńıch památkách. zde by bylo možno použ́ıt digitálńı mapový podklad a doplnit ho daľśımi užitečnými informacemi. laserové skenováńı se stalo v posledńıch několika letech novou technologíı dokumentace a hromadného sběru tř́ırozměrných bod̊u z okoĺı. zároveň se objevily také dynamické metody, využ́ıvaj́ıćı systému gps doplněného inerciálńım navigačńım systémem. systémy s inerciálńı jednotkou patř́ı dnes ke špičkovým technickým zař́ızeńım a využ́ıvaj́ı se v leteckých aplikaćıch. pozemńı dynamické systémy jsou zat́ım ve vývoji. ćılem výzkumu v tomto př́ıpadě bude vytvořeńı dynamického systému pro zaměřováńı pozemńıch liniových oblast́ı umı́stěný na automobilu nebo železničńım vozidle. jádrem systému bude přij́ımač galileo a laserová měř́ıćı hlava. předpokládaným výsledkem měřeńı by byl 3d model bĺızkého okoĺı proj́ıžděného úseku s v́ıceúčelovým využit́ım. geinformatics fce ctu 2006 94 geoinformatics fce ctu 12, 2014 28 absolute baseline for testing of electronic distance meters jaroslav braun, filip dvo�á�ek, martin štroner czech technical university in prague, faculty of civil engineering, department of special geodesy, thákurova 7, 16629 praha 6, czech republic, e-mail: jaroslav.braun@fsv.cvut.cz, filip.dvoracek@fsv.cvut.cz, martin.stroner@fsv.cvut.cz abstract the paper deals with the construction and determination of coordinates of the absolute edms baseline in a laboratory with 16 pillars with forced centring. leica absolute tracker at401 (standard deviation of distance measurement: 5 µm, standard deviation of angle measurement: 0.15 mgon), which is designed for very accurate industrial measurements, was used for our purpose. lengths between the baseline points were determined with a standard deviation of 0.02 mm. the baseline is used for determining systematic and random errors of distance meters and for accuracy of distance meters at short distances common in engineering surveying for purposes of mechanical engineering. key words: absolute tracker, edm baseline, pillars, forced-centring plates, accurate lengths. 1. introduction all surveying instruments and their measurements suffer from some errors. to refine the measurement results, it is necessary to use procedures restricting influence of the instrument errors on the measured values or to implement numerical corrections. to determine the magnitude of the errors and standard deviations of measurements of distance meters of total stations edm (electronic distance meter) baselines are used. 
calibration of distance meters at baselines is carried out as a simple regular check [1], as legal metrological control of measurement, or for error detection and more accurate results [2]. edm baselines are realized either outdoors or in a laboratory. outdoor baselines are usually made up of pillars with forced centring and have lengths of more than 1 km [3, 4]. lengths on these baselines are determined with an accuracy of 0.5 mm – 4.0 mm (for new determinations 0.3 ppm · d or less). distance meters are tested directly by comparing the measured distance with the reference distance between the forced centrings. these baselines are used mostly for routine calibration of distance meters used in common practice. laboratory edm baselines are used for accurate experimental measurements [5] or for calibration of total stations used in engineering. the lengths of these baselines are 20 m – 50 m. a classical laboratory baseline consists of a rail and an interferometer (the standard deviation of the distance measured by the interferometer is 1.5 ppm · d, [5]); the difference between the relative distance measured by the interferometer and by the total station is determined. an edm baseline combining both types, having stabilized points with forced centring and distances determined with extra high precision, was realized in the geodetic laboratory at the faculty of civil engineering ctu in prague. the 16 concrete pillars with forced centring were set up and the absolute distances between the points were determined with a standard deviation of 0.02 mm using the leica absolute tracker at401. the baseline was built for testing of the distance meters of total stations; the actual lengths between the pillars ("true values") are compared with the measured lengths. the aim of the tests is to identify random and systematic errors of measured distances and to determine possible methods of their correction for usage in engineering (indoor) measurements. 2. laboratory edm baseline the baseline is located in 4 basement lab rooms in building c of the faculty of civil engineering ctu in prague (figure 1). the baseline is made up of 16 concrete pillars in a single row (figure 2). the heights of the pillars are 0.9 m and the size of their square heads is 0.35 m x 0.35 m. neighbouring pillars are mutually spaced at distances of 0.9 m – 5.0 m and the total length of the baseline is 38.6 m. a temperature of about 20 ºc is maintained in the laboratory all year. in 2013, the heads of the pillars were equipped with centring plates (figure 3), which are arranged in line with a maximum lateral deviation of 2 mm. each centring plate is levelled in a horizontal plane (maximum slope 0.7%). the heights of the centring plates are not the same due to the different heights of the pillars; their maximum difference is 50 mm. the centring plates are turned in a cylinder of hardened aluminium. the plate diameter is 140 mm and the clamping screw is in its middle. the plates are bolted to the heads of the pillars at 3 points using 40 mm screws with dowels. the centring plates are marked by lines so that the tribrachs are always tightened in the same position. figure 1: scheme of the baseline. figure 2: photographs of the baseline.
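to make the intended use of the baseline concrete, the following minimal sketch shows how an additive constant and a scale error of a tested distance meter could be estimated by least squares from the differences between measured lengths and the reference lengths of a baseline; the numerical values and variable names are illustrative assumptions only, not data or software from this paper.

```python
import numpy as np

# reference ("true") baseline lengths [m] and lengths measured by the tested
# distance meter [m] -- illustrative values, not data from the paper
d_ref = np.array([0.9, 5.3, 11.8, 19.5, 27.4, 38.6])
d_meas = np.array([0.9012, 5.3014, 11.8019, 19.5021, 27.4030, 38.6041])

# error model: d_meas - d_ref = c + k * d_ref  (c = additive constant, k = scale error)
A = np.column_stack([np.ones_like(d_ref), d_ref])
l = d_meas - d_ref
(c, k), *_ = np.linalg.lstsq(A, l, rcond=None)

residuals = l - A @ np.array([c, k])
sigma0 = np.sqrt(residuals @ residuals / (len(l) - 2))  # a posteriori std deviation

print(f"additive constant c = {c*1000:.2f} mm")
print(f"scale error k = {k*1e6:.1f} ppm")
print(f"std deviation of a single distance difference = {sigma0*1000:.2f} mm")
```

a simple linear error model of this kind is only a sketch; the actual testing procedure used on the baseline may differ.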
3. instruments and equipment a leica absolute tracker at401 (figure 4), two specially selected topcon tribrachs, one leica gzr3 carrier with optical plummet and one leica rrr1.5 in spherical prism (figure 5) were used for the determination of the size of the baseline. the leica absolute tracker at401 is primarily designed for high-precision engineering measurements. its measurement accuracy is characterized by standard deviations σφ = σz = 0.15 mgon for horizontal directions and zenith angles and σd = 5 µm for distances. the maximum range of the distance measurement is 160 m. for the purpose of the experimental measurements the absolute tracker was borrowed from the research institute of geodesy, topography and cartography of the czech republic (vúgtk). figure 3: the head of a pillar with centring plate. figure 4: leica absolute tracker at401. figure 5: target set – topcon tribrach, leica carrier gzr3, leica prism rrr1.5. 4. testing of the equipment it is assumed that 2 tribrachs (1 for the instrument and 1 for the target), 1 carrier and 1 prism will be used for testing of the distance meters on the baseline. to ensure the high precision of the absolute distances determined between the pillars and the informative value of the distance meter tests, it was necessary to perform test measurements of the tools to determine their parameters, so that subsequent measurements were not discarded unnecessarily because of damaged equipment. equipment from the facilities of the department of special geodesy at the faculty of civil engineering of the czech technical university in prague was selected to determine the baseline. one precise leica gzr3 carrier was available, for which the manufacturer specifies a centring accuracy of 0.3 mm. it was also possible to use 5 topcon tribrachs, from which the two best ones with similar characteristics according to the test measurements were selected, and one of the three available leica gmp101 mini prisms was chosen. all this equipment is used for the purposes of the baseline measurements only. to verify the reliability and accuracy of the tribrachs, tests of re-centring and re-attaching of the carrier and also a test of repeated rotation of the carrier in the tribrach were designed. the addition constants of the mini prisms were also determined. all tests were performed with the use of the tracker. for the test of the repeatability of centring, the appropriate carrier and tribrach were always assembled together. screwing and levelling on the centring plate by the carrier's bubble were done five times, while the same position and orientation of the tribrach screws and of the carrier were maintained. a length measurement was performed after each screwing. between the five tribrachs, the difference in length at the same pillar was up to 0.7 mm. the sample standard deviation of the repeatability of centring of the individual tribrachs reached values of 0.002 mm – 0.007 mm. for the test of the repeatability of placement of the carrier (verification of the functionality of the locks of the tribrach), a tribrach was always screwed on a centring plate and accurately levelled by the carrier bubble. then the carrier was five times removed and clamped again, with the levelling and the orientation of the carrier towards the instrument left unchanged. after each clamping, length measurements were taken. the standard deviation of repeatability reached values of 0.002 mm – 0.04 mm.
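the sample standard deviation of repeatability quoted above can be obtained from a handful of repeated readings as in the following short sketch; the readings are invented for illustration and are not measurements from the paper.

```python
import numpy as np

# five repeated length readings [m] after re-centring one tribrach on the same
# pillar -- illustrative numbers only
repeats = np.array([12.345671, 12.345668, 12.345674, 12.345670, 12.345669])

# sample standard deviation of repeatability (ddof=1 gives n-1 in the denominator)
s = np.std(repeats, ddof=1)
spread = repeats.max() - repeats.min()

print(f"sample standard deviation: {s*1000:.4f} mm")
print(f"max spread of the five readings: {spread*1000:.4f} mm")
```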
on the basis of these tests 2 of the tribrachs for that the test of centring showed the same position were selected and they have also a minimal standard deviation of repeated clamping of carrier to tribrach. it was taped one adjustment screw on tribrachs so that when it was re-centred, it was always equally high. values determined in test measurements also reflect the degree of wear of tools and point to necessity of control of used equipment. for the test of the eccentricity of the carrier tribrach with a carrier was accurately levelled. the carrier was then rotated around its axis to the 9 position and rotation was performed twice (clockwise and vice versa). after each rotation the distance was measured. there was detected approximately circular eccentricity with radius of 0.03 mm. additive constants of the leica gmp101 mini prisms were determined by the comparison of the lengths measured at the spherical prism and leica mini prisms. lengths were measured on three pillars of baseline and to each pillar it was measured five times. for all tested prisms additive constant of approx. -16.5 mm was determined in comparison of -16.9 mm specified by the manufacturer. sample standard deviation of determined constants totals had reached values of 0.006 mm – 0.03 mm. designated additive constant of selected mini prism is used in the testing of instruments on the baseline. braun, j. et al: absolute baseline for testing of electronic distance meters geoinformatics fce ctu 12, 2014 32 5. determination of the dimensions of the baseline the aim of the measurements was to determine the horizontal distances between the pillars with accuracy better than 0.05 mm. measurements were taken from four standpoints and then determination of coordinates of points by the adjustment by method of least squares was made, all in july 2013. for the measurement were chosen 3 standpoints directly on the pillars of the baseline (first and last pillar 1 and 16 and the middle pillar 10) and one eccentric standpoint (1.5 m from the axis of baseline near pillar no. 6). from each standpoint it was measured in two faces at all the pillars of baseline. to guarantee the high accuracy required, it was necessary to comply the principles of minimizing of the effects of eccentricity of equipment. the tribrachs and centring plates are marked with signs and its coincidence ensures that the tribrach is screwed always in the same position. to maintain the orientation of the carrier leica gzr3 the eyepiece was always turned toward the first pillars. spherical leica prism was always rotated with leica sign up to preserve its orientation. 6. calculation for adjustment program gnu gama [6] was used, which allows in a simple xml file (extensible markup language) to define the input file for adjustment, and allows entering various standard deviations to the individual measurements. to calculate the local coordinates xy horizontal directions and horizontal distances were used, where the standard deviation of the directions 0.3 mgon was chosen and length of 0.025 mm, both with respect to the identified uncertainties of centring of targets. in adjustment was considered that coordinates of the forced centring with target will be identical to the coordinates of the centre of the instrument. the adjustment, however, showed that there is a certain eccentricity, the difference in the location of device and standpoint was approx. 0.2 mm (due to different used tribrach). 
therefore, the standpoints of the leica absolute tracker on the pillars of the baseline were designated as separate (free) points. in the resulting adjustment, the plane coordinates of the four standpoints and the sixteen pillars were therefore determined. 122 measurements (horizontal directions and distances) were used in the calculation, of which 81 were redundant. no measured value was reported as an outlier in the adjustment. the mean standard deviation of the position of the adjusted points reached 0.03 mm, and the maximum positional standard deviation was 0.06 mm. the a priori standard deviation was chosen as 1.0 and the a posteriori standard deviation after the adjustment was 0.77. the size of the a posteriori standard deviation indicates that the standard deviations of the measurements achieved were better than expected. the horizontal distances between the baseline points were calculated from the adjusted coordinates; their standard deviations were 0.02 mm (approximately valid for all of them). these distances are now used as a reference for testing of the distance meters. 7. conclusion in the geodetic laboratory of the faculty of civil engineering ctu in prague, a 38.6 m long edm baseline consisting of 16 concrete pillars with forced centring was established. the lengths between the pillars were determined using the accurate instrument leica absolute tracker at401 and the least squares adjustment method, with a standard deviation of 0.02 mm. control verification measurements are assumed to be carried out from time to time to determine the stability of the baseline and the constancy of the used equipment. the baseline is at present used for testing of the distance meters of instruments useful in indoor industrial measurements and for determining the size of their errors. acknowledgements the article was written with the support of the internal grant of the czech technical university in prague sgs14 "optimization of acquisition and processing of 3d data for purpose of engineering surveying". references [1] čsn iso 17123-4 (2005). optics and optical instruments – field procedures for testing geodetic and surveying instruments – part 4: electro-optical distance meters (edm instruments). prague: czech standards institute. 24 pp. [2] rüeger, j. m. (1990). electronic distance measurement. 3rd ed. heidelberg: springer-verlag, 266 pp. isbn 3-540-51523-2. [3] jokela, j., häkli, p. (2006). current research and development at the nummela standard baseline. in: shaping the change – xxiii fig congress, munich. 15 pp. available: https://www.fig.net/pub/fig2006/papers/ps05_02/ps05_02_02_jokela_hakli_0371.pdf [4] lechner, j., červinka, l., umnov, i. (2008). geodetic surveying tasks for establishing a national long length standard baseline. in: integrating generations – fig working week 2008. stockholm. 9 pp. [5] de wulf, a., constales, d., meskens, j., nuttens, t., stal, c. (2011). procedure for analyzing geometrical characteristics of an edm calibration bench. in: bridging the gap between cultures – fig working week 2011. marrakech. 14 pp. available: http://www.fig.net/pub/fig2011/papers/ts08e/ts08e_dewulf_constales_et_al_5266.pdf [6] čepek, a. (2013). gnu gama 1.14 – adjustment in geodetic networks. edition 1.14 [online].
Available: http://www.gnu.org/software/gama/manual/index.html

Monitoring of bridge dynamics by radar interferometry

Imrich Lipták, Ján Erdélyi, Peter Kyrinovič, Alojz Kopáčik
Slovak University of Technology, Faculty of Civil Engineering, Radlinského 11, 81368 Bratislava, Slovakia, e-mail: imrich.liptak@stuba.sk

Abstract

The paper presents the possibilities of radar interferometry in the dynamic deformation monitoring of bridge structures. The technology is increasingly used for this purpose thanks to the high accuracy of the measurements and the possibility of measuring deformation at multiple places on the monitored structure. The high sampling frequency (up to 200 Hz) makes it possible to determine most of the significant vibration modes of bridge deformation. The technology is presented in a real case study of the cycle bridge over the river Morava near Bratislava (Slovak Republic). A spectral analysis of the vibration frequencies is performed by the discrete Fourier transformation. The correctness of the obtained deformations is evaluated by comparing the results with accelerometer and total station measurements and with an FEM (finite element method) model of the structure.

Key words: bridge, dynamic deformation, spectral analysis, ground-based radar

1. Introduction

Bridge structures are exposed to a large extent to external influences such as weather conditions and loading by moving objects. These factors have a significant influence on the behaviour of the structure, which results in deformation of the whole structure or of its parts. Changes in a structure's deformation typically have a cyclical behaviour reflecting the influences of the surroundings. From them, the rate of the structure's stress and the magnitude of the impact of the individual factors on the structure can be determined. This paper describes a methodology for processing ground-based radar measurements. It describes the determination of relative displacements from ground-based radar data and the possibilities of determining the modal characteristics of structural deformations. These possibilities are presented in a case study using a mathematical model and measurements of a pedestrian and cycling bridge.

2. The technology of radar interferometry

Ground-based radar is an innovative measurement approach for the dynamic deformation monitoring of large structures such as bridges [2], [4] and [6]. Radar measurements use the stepped frequency continuous wave (SF-CW) technique, which enables the detection of target displacements in the radar's line of sight. The basic principle of the technique is the transmission of a set of sweeps consisting of a number of electromagnetic waves at different frequencies. A pulse radar generates pulses of short duration to obtain a range resolution, which is related to the pulse duration according to

\Delta r = \frac{c\,\tau}{2}, \qquad (1)

where c is the speed of light in free space and τ is the time of flight of the pulse. At each time interval of the measurement, the components of the received signal represent the frequency response measured at a number of discrete frequencies. By applying an inverse Fourier transformation (IFT), the frequency response is transformed into the time domain.
The system then builds a one-dimensional image, a range profile, in which the reflectors are resolved with the range resolution according to their distance from the radar (Figure 1, left).

Figure 1: Radar range resolution principle (left) and radial displacement and projected displacement (right)

Once the range profile is generated, the displacements of the targets are detected by the differential interferometry technique. This approach compares the phase delay of the emitted and reflected microwaves. The radial displacement d_p is therefore linked with the phase delay Δφ by

d_p = \frac{\lambda}{4\pi}\,\Delta\varphi, \qquad (2)

where λ is the wavelength of the signal. The radial displacements of the targets can be transformed into vertical displacements according to Figure 1 (right).

3. Case study

The practical use of the ground-based radar and of the spectral methods of data processing described in the previous chapter was demonstrated on an actual bridge structure. Bridge structures for pedestrians are usually designed as flexible structures with higher deformation amplitudes than road and railway bridges. Experimental measurements were carried out on the pedestrian and cycling bridge over the river Morava from Devínska Nová Ves (Slovakia) to Schlosshof (Austria) [1]. The known modal characteristics of the structure's dynamic deformations are useful for verification of the measured deformations. The cycling bridge is built on a cycling route between the Bratislava urban district of Devínska Nová Ves and the Austrian village of Schlosshof (Figure 2). The bridge is a steel truss structure consisting of three parts with a total length of 525.0 m. The main steel structure, which is the subject of the dynamic deformation monitoring, is a symmetrical cable-stayed steel structure (stayed by rods anchored at the tops of four steel pylons) with three sections of lengths 30.0 + 120.0 + 30.0 = 180.0 m [1].

Figure 2: Cycling bridge in Devínska Nová Ves

3.1 Dynamic deformation measurements

Higher wind intensity in the surroundings of the bridge, as well as pedestrian movement, has a significant influence on the structural deformations. These deformations are characterized by vertical bending and horizontal torsional oscillation of the bridge deck. The ground-based radar IDS IBIS-S measures dynamic displacements by comparing the phase shifts of reflected radar waves collected at the same time intervals. Displacement is measured in the radial direction (line of sight). The minimal radial range resolution of the radar is 0.5 m. The accuracy of the measured displacements is at the level of 0.01 mm, but it depends on the range and on the quality of the reflected signal [2]. The measurements and the data registration are managed by the IBIS-S operational software installed on a notebook. The measurements were carried out during different types of loading of the structure: walking, running and jumping of one person (80 kg weight) on the structure. The types of loading were designed on the basis of the FEM (finite element method) model of the structure [3]. The data registration frequency of the ground-based radar was set to 100 Hz because of the requirement to achieve a higher accuracy of the relative displacements and because of the occurrence of significant frequencies of structural deformation higher than 10 Hz.
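Before comparing the radar results with the other sensors, it is worth making the reduction from Section 2 concrete. The sketch below implements equations (1) and (2) and a simple projection to the vertical; the projection assumes a purely vertical target movement and a known target height above the radar, which is how such line-of-sight measurements are commonly reduced, while the paper itself only refers to Figure 1 for this step.

```python
import numpy as np

C = 299_792_458.0  # speed of light in free space [m/s]

def range_resolution(tau_s: float) -> float:
    """Range resolution of a pulse radar, eq. (1): delta_r = c * tau / 2."""
    return C * tau_s / 2.0

def radial_displacement(delta_phi_rad: np.ndarray, wavelength_m: float) -> np.ndarray:
    """Line-of-sight displacement from the interferometric phase delay, eq. (2)."""
    return wavelength_m / (4.0 * np.pi) * delta_phi_rad

def vertical_displacement(d_radial: np.ndarray, slant_range_m: float,
                          target_height_m: float) -> np.ndarray:
    """Project the radial displacement to the vertical (Figure 1, right),
    assuming the target moves only vertically: d_vert = d_rad * R / h."""
    return d_radial * slant_range_m / target_height_m
```

If, for example, the wavelength were about 17 mm (a typical Ku-band value, not a figure quoted in the paper), a phase change of 0.1 rad would correspond to roughly 0.14 mm of radial displacement, which illustrates why sub-millimetre sensitivity is achievable.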
The verification of the measured deformations was performed with an HBM B12/200 accelerometer and a Leica TS30 total station [5].

Figure 3: Components of the measurement system

3.2 Data processing and analysis

The raw displacement data are measured in the radial direction (the line of sight), while for our purposes vertical deformations were needed. The first step in the processing of the radar measurements is therefore setting the geometry of the structure and defining the positions of the radar and of the measured points. The measured points can be defined manually using corner reflectors, or by finding parts of the structure which have acceptable reflection parameters. In our case the second option was chosen. A good reflection of the signal from the structure defines the range bin profile (Figure 4). The selected peak corresponds to the position of the accelerometer and of the prism stabilized at the centre of the structure. The structure is built from transversal beams, which are exploited as natural reflectors. Figure 4 visualizes the estimated signal-to-noise ratio (SNR) of the signal's reflection depending on the range along the structure. The SNR is presented in linear units. A signal with an SNR above the threshold (green line) has a better quality, and a higher accuracy in determining the displacements can be assumed for it.

Figure 4: Range bin profiles

Table 1 shows the projected vertical displacements (the projection is described by Figure 1, right) of the structure at the centre of the bridge. It can be seen that the time series of the displacements measured by the accelerometer behave similarly to the displacements measured by the radar. Walking pedestrians have a minimal influence on the vertical displacements. Rapid movement of pedestrians affects the maximum vertical displacements about twice as much as loading by a walking pedestrian. Jumping of a pedestrian causes displacements of up to 4.85 mm at the centre of the structure.

Figure 5: Displacements (left) and periodogram (right) during jumping – centre of the structure

Table 1: Maximum displacements at the centre of the structure (in millimetres)

  Sensor               No load   Walking   Running   Jumping
  Accelerometer          0.69      0.73      0.72      4.82
  Ground-based radar     0.71      0.74      0.76      4.63
  Total station          0.71      1.03      0.73      2.85

The natural frequencies of the structural deformations at the centre of the bridge were determined by an auto-spectral analysis using the fast Fourier transformation (FFT) algorithm. Table 2 shows the dominant frequencies of the deformations at the centre of the structure determined by each type of sensor during the different types of loading.

Table 2: Dominant frequencies of the deformations at the centre of the structure

  Vibration mode   Natural frequency [Hz]   Measured dominant frequencies [Hz]
  2                1.63                     1.53 (2), 1.46 (3), 1.53 (2), 1.75 (3)
  22               2.15                     2.05 (1), 2.01 (2), 1.80 (1), 1.82 (2), 1.87 (3)
  23               2.49                     2.90 (1), 2.97 (2)
  44               3.76                     3.99 (1), 3.71 (2), 3.51 (2)
  –                > 5.00                   9.10 (1), 9.11 (2), 9.07 (2)
  59               4.69                     4.59 (1), 4.56 (2)

  (1) accelerometer, (2) ground-based radar, (3) total station; values measured under the no load, walking, running and jumping load cases

The FEM model contains 5 vibration modes which have a significant influence on the structure's dynamic deformation. The vibration modes are defined for specific types of loading.
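The frequencies in Table 2 come from exactly this kind of auto-spectral analysis. As an illustration, the following sketch computes a one-sided periodogram of a displacement time series with NumPy and returns its largest peaks; the function name and the simple peak-picking rule are our own, the paper only states that an FFT-based auto-spectral analysis was used.

```python
import numpy as np

def dominant_frequencies(displacement: np.ndarray, fs_hz: float, n_peaks: int = 5):
    """Return the frequencies [Hz] of the n_peaks largest periodogram ordinates.

    displacement -- evenly sampled time series of (vertical) displacements
    fs_hz        -- sampling frequency, e.g. 100 Hz for the radar in this study
    """
    x = displacement - displacement.mean()              # remove the static offset
    power = np.abs(np.fft.rfft(x)) ** 2                 # one-sided periodogram
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_hz)
    peaks = np.argsort(power[1:])[::-1][:n_peaks] + 1   # skip the DC bin
    return freqs[peaks], power[peaks]
```

With the 100 Hz registration rate used here, a record of, say, 60 s yields a frequency resolution of about 0.017 Hz, comfortably finer than the spacing of the modes listed in Table 2.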
Table 2 shows the significant influence of pedestrians on the dynamic response of the monitored structure. The dominant frequencies of the deformations agree well with the 22nd vibration mode of the structure, which has a dominant influence on the structure's stability. The frequencies estimated at approximately 1.80 Hz and 2.10 Hz are affected by the walking and jumping of the pedestrian, because the frequency of a pedestrian's steps and jumps has similar values. This dominant influence on the stability of the structure was confirmed during the experiment by the low stability of the pedestrian moving on the structure. The most dominant frequency above 5 Hz estimated in the time series was at the level of 9.10 Hz. These frequencies have minimal amplitudes and a low influence on the stability of the structure. The differences can be affected by the accuracy of the FEM model and by the fact that the dampers on the rods are not included in the FEM model calculation. The accuracy of the estimated frequencies in each time series is below 1 %.

4. Conclusion

The paper presents possibilities for the analysis of structural dynamic deformations measured by ground-based radar, an accelerometer and a total station. A practical application of the processing and analysis of these data was realized at the cycling bridge at Devínska Nová Ves. The results of the experimental measurements correspond with the FEM model of the main steel structure and can be used for the calibration of this model. The measurements show that ground-based radar is a promising non-contact technology, able to monitor the deformation of an entire structure synchronously and with a high sampling frequency. The accuracy of the measured deformation depends on the system configuration and on the quality of the reflected signal; this can be improved by corner reflectors, especially when the surface of the structure has low reflectivity. The results can significantly contribute to the prediction of possible failures of the structure, which may show up as temporal changes in the modal frequencies at the measured points. Such failures can then be investigated by other surveying methods such as terrestrial laser scanning.

Acknowledgement

This work was supported by the Slovak Research and Development Agency under contract no. APVV-0236-12.

References

[1] Agocs, Z., Vanko, M. (2011): Devínska Nová Ves – Schlosshof cycling-bridge. Project design report. Bratislava: Ingsteel, a.s., 25 pp. (in Slovak).
[2] Bernardini, G., De Pasquale, G., Bicci, A., Marra, M., Coppi, F., Ricci, P. (2007): Microwave interferometer for ambient vibration measurements on civil engineering structures: 1. Principles of the radar technique and laboratory tests. EVACES '07, 2007.
[3] Excon, Ltd. (2010): Evaluation of the dynamic FEM model of the Devínska Nová Ves – Schlosshof cycling-bridge. 42 pp. (in Slovak).
[4] Gentile, C. (2009): Radar-based measurement of deflections on bridges and large structures: advantages, limitations and possible applications. IV ECCOMAS Thematic Conference on Smart Structures and Materials (SMART'09), 2009, pp. 1–20.
[5] Kopáčik, A., Lipták, I., Kyrinovič, P., Erdélyi, J. (2013): Monitoring of a cycling bridge using accelerometers and ground-based radar – a case study. In: 2nd Joint International Symposium on Deformation Monitoring, Nottingham, 9–11 September 2013.
Nottingham: University of Nottingham, 2013. 8 pp.
[6] Wenzel, H. (2009): Health Monitoring of Bridges. John Wiley & Sons, Ltd., 2009. 643 pp. ISBN 978-0-470-03173-5.

Denoising of laser scanning data using wavelets

Petr Jašek, Martin Štroner
Czech Technical University in Prague, Faculty of Civil Engineering, Department of Special Geodesy, Thákurova 7, 16629 Praha 6, Czech Republic, e-mail: petr.jasek@fsv.cvut.cz, martin.stroner@fsv.cvut.cz

Abstract

With regard to the accuracy of terrestrial laser scanning, one of the main problems is the noise in the measured distance, which is needed for the determination of the spatial coordinates. This paper describes a technique that uses the wavelet transformation to reduce the noise in laser scanning data. The filtering is done in post-processing, so no changes of the measuring procedure in the field are necessary. To apply image processing, a regular matrix has to be created from the data; this matrix then forms the range image. The paper presents real and simulated efficiency tests of the wavelet transformation, a final summary and the advantages and disadvantages of the method.

Key words: laser scanning, noise reduction, wavelet transform, points

1. Introduction

3D terrestrial scanning based on the spatial polar method (frequently also called laser scanning) is a surveying method for the determination of the spatial coordinates of points, usually within one scan in a regular angular raster. The horizontal direction, the zenith angle and the slope length are always measured. The coordinates are expressed in a local system defined by the origin, i.e. the reference point of the scanner, and by the basic (zero) horizontal direction identifying the direction of the positive x axis. Many scanning systems work at an arbitrary tilt, so the actual xy plane is not necessarily horizontal. Significant characteristics of the scanning process are a high measurement velocity and the non-selectivity of the point sampling. The measurement accuracy may be characterised by the standard deviations of the measurement of the above-mentioned quantities (for more about 3D terrestrial scanning see [1]).

In common surveying measurements requiring a higher accuracy than the accuracy corresponding to a single measurement, the number of repeated measurements can be increased to obtain a more accurate result as their arithmetic mean, or the same effect can be achieved by measuring redundant quantities and improving the accuracy by adjustment. In 3D scanning, on the contrary, this is normally not possible (with the exception of some instruments of the Trimble company, e.g. the GS 200 [2], or of the ScanAverager program [3] for repeatedly scanned data from selected instruments). The accuracy of the measured lengths is usually one of the factors limiting the achievable accuracy of multipurpose 3D scanners; on continuous surfaces, however, this accuracy may be improved by eliminating the noise (random errors) on the basis of the properties of neighbouring points.

2. Description of the wavelet transform processing algorithm

The wavelet transform is an integral transformation that provides a time-frequency description of a signal. It can be used for data decorrelation, i.e. decomposition of the signal into independent building blocks.
The wavelet family is derived from the mother wavelet by scaling and translation,

\psi_{s,\tau}(t) = \frac{1}{\sqrt{|s|}}\,\psi\!\left(\frac{t-\tau}{s}\right), \qquad s,\tau \in \mathbb{R},\ s \neq 0, \qquad (1)

where ψ is the mother wavelet, s is the scale changing the width of the mother wavelet (dilatation) and τ is the location parameter changing the position of the wavelet on the timeline (translation). The whole mathematical principle and a precise description can be found in [4]. For denoising, the discrete wavelet transform (DWT) is used. For the discrete wavelet transformation it is most appropriate to use dyadic sampling, in which an orthonormal basis is created from a suitable wavelet ψ,

s = 2^{p}, \qquad \tau = k\,2^{p}, \qquad p, k \in \mathbb{Z}, \qquad (2)

and therefore

\psi_{p,k}(t) = \frac{1}{\sqrt{2^{p}}}\,\psi\!\left(\frac{t - k\,2^{p}}{2^{p}}\right), \qquad (3)

where p is the scale and k the location. Thanks to the orthogonality, the chosen wavelet allows a non-redundant signal decomposition, the so-called multiresolution analysis.

3. Preparation of data before denoising

For our application the discrete wavelet transform was used. The main idea of applying the wavelet transform to the elimination of distance noise in a scan is to transfer the obtained coordinates x, y, z back to the original polar coordinates φ, ξ, d. To apply image processing it is necessary to create a regular matrix from the data, which then forms the range image. Every measured distance has a precisely determined position in the matrix and defines the value of one pixel of the image:

P(n,m) = [x(n,m);\ y(n,m);\ z(n,m)] = [\varphi(n,m);\ \xi(n,m);\ d(n,m)]. \qquad (4)

Figure 1: Creation of the range image

The wavelet transform was applied after creating the range image in the software Matlab R2011b. The parent family of orthonormal Daubechies wavelets [5] was used; the choice of the Daubechies wavelet was based on previous testing.

4. The reference method – denoising by near-points surface fitting

The method is based on surface fitting by a plane and by polynomial surfaces of 2nd, 3rd and 4th degree. The processing involves a gradual choice of a selected number of nearest points for each point of the scan; the selected surface is fitted through them, and by elongating or shortening the beam with the given horizontal direction and zenith angle to its intersection with the surface, a new (smoothed) position of the point is obtained. Although scanning data are always acquired in a certain order during the measurement, they do not preserve this arrangement after export. The procedure chosen for searching the neighbourhood of a point in a large point cloud (hundreds of thousands to millions of points) therefore converts the problem of searching the neighbourhood in space (3D) into searching on a plane (2D); to this end, an algorithm was designed which works with the coordinates recalculated into slope lengths, horizontal directions and zenith angles in the local coordinate system of the scanner. To make this procedure usable, the untransformed data must be arranged into a regularly spaced matrix of points. A full description was given in [6].

5. Testing the application of the smoothing (denoising) methods

In addition to a real scan, virtual flawless scans of a spherical and of a planar surface were created for testing the new denoising methods. These scans were decomposed into the components φ, ξ, d, and the distance component was contaminated with artificial noise. The noise had a normal distribution with a standard deviation of 2 mm.
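The denoising step applied to such a noisy range image can be sketched as follows. The sketch uses the PyWavelets library instead of Matlab (our implementation choice, not the authors'): it soft-thresholds the detail coefficients of a 2D Daubechies-6 decomposition of the range image. The threshold rule (the universal threshold) is likewise our assumption; the paper does not state which rule was applied.

```python
import numpy as np
import pywt

def denoise_range_image(d: np.ndarray, wavelet: str = "db6", level: int = 2,
                        sigma: float = 0.002) -> np.ndarray:
    """Wavelet denoising of a range image d (regular matrix of distances).

    d and sigma must be in the same units (here metres, sigma = 2 mm as in
    the simulated tests).  Detail coefficients are soft-thresholded with the
    universal threshold sigma * sqrt(2 ln N); approximation coefficients are
    kept unchanged.
    """
    coeffs = pywt.wavedec2(d, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(d.size))
    out = [coeffs[0]]
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thr, mode="soft") for c in details))
    rec = pywt.waverec2(out, wavelet)
    return rec[: d.shape[0], : d.shape[1]]   # crop possible padding for odd sizes
```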
Re-scanned coordinates were calculated from the lengths including the noise and were then denoised using the methods described above. Two criteria were selected for evaluating the success of these methods: the standard deviation (of the differential model against the flawless source, and of the fit by a sphere or plane) and the number of triangles of the TIN model.

5.1 Virtual scan – sphere

Figure 2: Differential models: a) scan with noise, b) denoised – wavelet db6, c) second order Chebyshev LSM, d) third order Chebyshev LSM, e) fourth order Chebyshev LSM, f) plane – LSM

Table 1: Achieved accuracy – virtual scan of a sphere

  Variant                  Std. dev. of the fit [mm]   Number of triangles   Std. dev. of the model [mm]
  Virtual scan                       0.0                      7 003                     –
  Scan with noise                    1.4                      5 061                    1.2
  Wavelet db6                        0.4                      7 283                    0.4
  Second order Chebyshev             0.6                      7 247                    0.5
  Third order Chebyshev              0.6                      7 204                    0.5
  Fourth order Chebyshev             0.9                      6 624                    0.7
  Plane                              0.3                      7 321                    0.3

5.2 Virtual scan – plane

Figure 3: Differential models: a) scan with noise, b) denoised – wavelet db6, c) second order Chebyshev LSM, d) third order Chebyshev LSM, e) fourth order Chebyshev LSM, f) plane – LSM

Table 2: Achieved accuracy – virtual scan of a plane

  Variant                  Std. dev. of the fit [mm]   Number of triangles   Std. dev. of the model [mm]
  Virtual scan                       0.0                     18 049                     –
  Scan with noise                    1.9                     14 050                    1.6
  Wavelet db6                        0.5                     17 906                    0.5
  Second order Chebyshev             0.8                     17 817                    0.7
  Third order Chebyshev              0.8                     17 725                    0.7
  Fourth order Chebyshev             1.3                     17 274                    1.0
  Plane                              0.4                     17 927                    0.4

5.3 Real scan

The quality of the denoising of an object cut out from the sculpture of David was assessed according to the number of created triangles and the visual quality of the resulting triangular model. An improvement in the quality of the triangular network is visible after the denoising methods are applied. The real scan was obtained with the 3D scanner Leica HDS 3000. The best results were obtained using the wavelet transform and the second order Chebyshev polynomial. When higher orders, or a higher number of iterations of the wavelet transform, were applied, the shape of the real object was suppressed. When denoising by fitting a plane was used, the original object was degraded.

Figure 4: Triangular networks: a) scanned object, b) real scan, c) averaged scan from 5 scans, d) denoised – wavelet db6, e) second order Chebyshev LSM, f) third order Chebyshev LSM, g) fourth order Chebyshev LSM, h) plane – LSM

Table 3: Achieved accuracy – real scan of the sculpture (number of triangles of the network)

  Real scan   Wavelet db6   Second order Chebyshev   Third order Chebyshev   Fourth order Chebyshev   Plane
   15 609       18 958             19 009                   18 922                  18 525            19 066

6. Influence of the scanning resolution and of the standard deviation of the distance on the success rate of the denoising

To test the influence of the scanning resolution and of the standard deviation of the distance, four virtual point clouds of a sphere (diameter 0.6 m) were created with resolutions of 2 mm × 2 mm, 4 mm × 4 mm, 6 mm × 6 mm and 10 mm × 10 mm. The xyz coordinates were decomposed into φ, ξ, d in the same way as in the previous case. The distance d was then contaminated with normally distributed noise with standard deviations of 2 mm and 4 mm.
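The success of the denoising in these tests is again judged by the standard deviation of the fit. For the spherical test object this criterion can be computed, for example, as the standard deviation of radial residuals with respect to a least-squares sphere; the sketch below uses a simple algebraic sphere fit, since the paper does not specify which estimator was used, so this is only one possible reading of the criterion.

```python
import numpy as np

def fit_sphere(points: np.ndarray):
    """Algebraic least-squares sphere fit.

    points is an (N, 3) array of xyz coordinates.  Solving
    x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d  in the least-squares sense
    gives the centre (a, b, c) and radius r = sqrt(d + a^2 + b^2 + c^2).
    """
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    rhs = (points ** 2).sum(axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([a, b, c])
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

def std_of_fit(points: np.ndarray) -> float:
    """Standard deviation of the radial residuals of the fitted sphere."""
    centre, radius = fit_sphere(points)
    residuals = np.linalg.norm(points - centre, axis=1) - radius
    return residuals.std(ddof=4)  # four estimated parameters
```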
As can be seen in Figure 5, the resolution of the scan does not influence the success rate of the denoising very much; a considerably more important parameter is the standard deviation of the distance. It can also be seen that the best results are achieved by the wavelet transform denoising method and by the plane fitting denoising method.

Figure 5: Graph of the standard deviation of the fit – sphere, noise 2 mm / 4 mm

7. Conclusion

The proposed methods for filtering the noise in the measured distance were tested on three surfaces (a sphere, a plane and a continuous part of the sculpture of David). The first two test surfaces were created virtually as flawless surfaces, and artificial noise was then added. The obtained results show an improvement for all of the proposed denoising solutions. It can be concluded that for mathematical surfaces the best results were achieved by denoising with the wavelet transform and by fitting a plane. For continuous structures very similar results are achieved by the wavelet transform and by fitting second order Chebyshev polynomials. It is clearly evident that the new filtration process leads to the suppression of the noise of the measured distance; the resulting models of the scanned objects can therefore be more accurate and credible.

Acknowledgements

The article was written with support of the internal grant of the Czech Technical University in Prague SGS14 "Optimization of acquisition and processing of 3D data for purpose of engineering surveying".

References

[1] Štroner, Martin, Pospíšil, Jiří (2008): Terestrické skenovací systémy. Vyd. 1. Praha: České vysoké učení technické v Praze, 353 s. ISBN 978-80-01-04141-3.
[2] Trimble (2005): "Trimble GS Series 3D Scanner," Trimble (5 March 2013).
[3] Štroner, Martin (2012): "Projekt ScanAverager v2.4.1," Dpt. of Special Geodesy, http://k154.fsv.cvut.cz/~stroner/scanaverager_v2/index.html (5 March 2013).
[4] Addison, Paul S. (2002): The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance. New York: Taylor. ISBN 07-503-0692-0.
[5] Daubechies, Ingrid (1992): Ten Lectures on Wavelets. Philadelphia, Pennsylvania: Society for Industrial and Applied Mathematics. xix, 357 s. (CBMS-NSF Regional Conference Series in Applied Mathematics; sv. 61). ISBN 0898712742.
[6] Smítka, Václav, Štroner, Martin (2013): 3D scanner point cloud denoising by near points surface fitting. In: Videometrics, Range Imaging and Applications XII; and Automated Visual Inspection. Bellingham (State Washington): SPIE, vol. 1. ISBN 978-0-8194-9607-2.

Správa výukových kurzů v systému Moodle

Petr Soukup
Department of Mapping and Cartography, Faculty of Civil Engineering, CTU in Prague
e-mail: soukup@fsv.cvut.cz

Úvod

S výukovým procesem je spojena řada nezbytných administrativních úkonů, které jsou značně časově náročné a zabírají omezenou kapacitu vyučujících. Patří sem činnosti jako evidence studijních aktivit studentů, zadávání a hodnocení úkolů, zadávání a hodnocení testů znalostí studentů, elektronická komunikace se studenty atd. Současně s rozvojem informačních technologií se prohlubuje trend poskytnout studentům pro studium větší volnost v čase i prostoru. Se všemi těmito aspekty moderní výuky se snaží pomoci programové nástroje označované často zkratkou CMS (Course Management System).
e-learning a čvut čvut sleduje a vyv́ıj́ı aktivity v oblasti e-learningu již deľśı dobu [navrátil 2006], ale zat́ım podle mého názoru poněkud hledá jeho optimálńı využit́ı ve výuce. počátky konkrétńıch řešeńı spadaj́ı zhruba do obdob́ı přelomu tiśıcilet́ı a jsou spojeny se systémem webct. bohužel z d̊uvodu rostoućıch licenčńıch poplatk̊u bylo záhy od využ́ıváńı tohoto systému upuštěno s t́ım, že jako jeho nástupce byl vybrán systém classserver firmy microsoft. pro zabezpečeńı náročných a rozsáhlých potřeb výuky na vysokých školách je však v současné době tento systém podle mého názoru málo vhodný. přes nákladné snahy o jeho integrováńı do fakultńıho informačńıho systému nedoznal podle mých informaćı větš́ıho praktického rozš́ı̌reńı. jako jedna z perspektivńıch variant daľśıho směřováńı v oblasti e-learningu se jev́ı možnost použ́ıvat pro správu výukových kurz̊u systémy založené na gnu licenci. jedńım z nejznáměǰśıch představitel̊u této kategorie je systém moodle. v současné době vznikaj́ı na čvut prvńı výukové kurzy provozované v systému moodle. některé z nich jsou dostupné na portálu čvut, věnovaném studijńım podklad̊um př́ıstupným po internetu: http://ocw.cvut.cz/moodle/. moodle – charakteristika moodle je programový systém spadaj́ıćı do kategorie cms. jako takový jednak integruje do jednotného prostřed́ı rozmanité nástroje a formy použ́ıvané při výuce a jednak podporuje administrativńı zabezpečeńı výuky. moodle je softwarový baĺık určený pro podporu prezenčńı i distančńı výuky prostřednictv́ım online kurz̊u dostupných na www. moodle je software volně šǐritelný na základě gnu licence s otevřeným php kódem. běž́ı na každém operačńım systému, který podporuje php (unix, linux, windows, mac os x, netware). všechna data jsou ukládána v jediné databázi. geinformatics fce ctu 2006 164 http://ocw.cvut.cz/moodle/ správa výukových kurzů v systému moodle největš́ı podpora je poskytována databáźım mysql a postgresql, nicméně lze použ́ıt i jiné databáze (oracle, access, interbase, odbc atd). daľśı informace lze źıskat na webových stránkách http://moodle.cz/ nebo http://moodle.org , kde se lze také doč́ıst o p̊uvodu slova moodle: slovo moodle bylo p̊uvodně akronymem pro modular object-oriented dynamic learning environment (modulárńı objektově orientované dynamické prostřed́ı pro výuku). lze ho také považovat za sloveso, které popisuje proces ĺıného bloumáńı od jednoho k druhému, děláńı věćı podle svého, hravost, která často vede k pochopeńı problému a podporuje tvořivost. v tomto smyslu se vztahuje jak k samotnému zrodu moodlu, tak k př́ıstupu studenta či učitele k výuce v on-line kurzech. systém moodle se úspěšně prosazuje na řadě vysokých školách v čr. na karlově univerzitě se systém moodle systematicky využ́ıvá již několik let a v současné době obsluhuje několik deśıtek výukových kurz̊u (http://div.cuni.cz/). také ve světě se zač́ıná moodle výrazně prosazovat. britská open university (http://www.open.ac.uk/) se rozhodla vybudovat rozsáhlý systém kurz̊u s využit́ım systému moodle. inovace vyvinuté v rámci projektu budou dostupné celé komunitě uživatel̊u tohoto systému. moodle – možnosti v daľśım textu je uveden přehled základńıch vlastnost́ı systému moodle. 
role uživatel̊u systém rozlǐsuje tyto uživatele (typy kont): � administrátor – je určen během instalace, spravuje systém jako celek, určuje tv̊urce kurz̊u, � tv̊urce kurzu – zakládá kurzy, určuje učitele pro kurzy, � učitel – edituje kurz a ř́ıd́ı jeho výuku, zapisuje a odhlašuje studenty z kurzu, � student – studuje kurz, spravuje sv̊uj osobńı profil, � host – student, který může nahlédnout do kurzu, ale nemá možnost zasahovat aktivńım zp̊usobem do jeho chodu. účastńıci výukového kurzu maj́ı v systému moodle vytvořený účet. účet si mohou studenti vytvářet sami (je-li to povoleno) nebo jim může (př́ıpadně muśı) účet vytvořit učitel a to bud’ dávkovým zp̊usobem celé skupině student̊u najednou či interaktivně jednotlivc̊um. účet je chráněný heslem, které se zadává a ověřuje při každém přihlašováńı do kurzu. ověřeńı hesla lze provádět několika mechanismy (v̊uči lokálně uloženému heslu, proti server̊um ldap, imap, pop3, nntp, př́ıpadně v̊uči libovolné exterńı databázi – je implementována podpora certifikát̊u ssl a tls). učitel může pro každý kurz stanovit “kĺıč k zápisu”, aby do něj měli př́ıstup pouze oprávněńı studenti. tento kĺıč sděĺı student̊um (osobně, soukromým e-mailem apod.) a ti jej zadaj́ı při geinformatics fce ctu 2006 165 http://moodle.cz/ http://moodle.org http://div.cuni.cz/ http://www.open.ac.uk/ správa výukových kurzů v systému moodle prvńım přihlášeńı do kurzu. uspořádáńı kurzu činnost výukového kurzu je poskládána z jednotlivých modul̊u. základńı moduly jsou: studijńı materiály – lze využ́ıt jakéhokoli materiálu dostupného v elektronické formě (texty, prezentace, flash, video nebo zvukové soubory ap.). materiály lze nahrát na server mooodlu nebo je odkazovat jako exterńı zdroje na internetu. v kurzu lze pracovat s webovými aplikacemi a předávat jim data. úkoly lze stanovit termı́n odevzdáńı a maximálńı počet bod̊u. výsledkem úkolu může být soubor v libovolném formátu (do zadané velikosti), který studenti ulož́ı na serveru. odevzdaný soubor je opatřen časem odevzdáńı. opožděné odevzdáńı lze umožnit, je však jasně zvýrazněno př́ıpadné opožděné odevzdáńı. lze navolit, zda již ohodnocenou úlohu lze znovu odevzdat – opravit. informace o ohodnoceńı úlohy je studentovi automaticky odeslána emailem. fórum – podporuje komunikaci mezi účastńıky kurzu. jsou k dispozici r̊uzné typy fór, např. učitelské, aktuálńı zprávy z kurzu, veřejné fórum nebo fórum umožňuj́ıćı každému uživateli založit pouze jedno téma diskuse. nové př́ıspěvky mohou být automaticky roześılány emailem. test – pro ověřováńı znalost́ı student̊u. existuje několik variant test̊u. otázky a odpovědi lze náhodně mı́chat, lze navolit časové intervaly, kdy lze test spustit, volitelně lze zobrazovat správné odpovědi. testy mohou obsahovat řadu typ̊u otázek (výběr jedné nebo v́ıce odpověd́ı, krátké tvořené odpovědi, přǐrazovaćı otázky, numerické úlohy včetně tolerance, otázky typu ano/ne). mezi daľśı užitečné moduly patř́ı např. moduly chat, dotazńık nebo workshop. výukový kurz může být uspořádán týdenńım, tématickým nebo diskusńım zp̊usobem. nástroje kurzu poskytuj́ı rozsáhlé možnosti sledováńı a zaznamenáváńı činnosti uživatel̊u. obsah kurzu lze zálohovat a přenášet na jiné servery se systémem moodle. moodle a igs na katedře mapováńı a kartografie se zabývám výukou předmětu interaktivńı grafické systémy (igs) [soukup, žofka, 2005]. předmět je rozložen do dvou semestr̊u. jeho hlavńım ćılem je naučit studenty základ̊um práce s grafickými editory, které budou dále využ́ıvat v navazuj́ıćıch odborných předmětech. 
v prvńım semestru se studenti seznámı́ s obecným grafickým systémem microstation, druhý semestr je věnován specifičtěji zaměřenému systému kokeš. oba předměty jsou doplněny navazuj́ıćımi volitelnými předměty, které prohlubuj́ı źıskané základńı znalosti. posledńı semestr jsme výuku jednoho z těchto předmět̊u realizovali pomoćı systému moodle. výuky se zúčastnilo cca 40 student̊u. zkušenosti s využit́ım systému jsou vesměs pozitivńı a proto od př́ı̌st́ıch semestr̊u poč́ıtáme s jeho nasazeńım v rámci obou základńıch předmět̊um igs1 i igs2 (každý z nich navštěvuje přibližně 100 student̊u). geinformatics fce ctu 2006 166 správa výukových kurzů v systému moodle při hodnoceńı našich praktických zkušenost́ı se systémem moodle se zaměř́ım na několik aspekt̊u, které mohou mı́t obecněǰśı charakter. studentská konta pro aktivńı práci ve výukovém kurzu muśı mı́t student v systému moodle založené osobńı konto (účet). jednoduchá generace studentských kont je však nezbytná podmı́nka pro větš́ı využit́ı systému moodle ve výuce. pokud maj́ı názvy kont odpov́ıdat jmén̊um student̊u, je potřeba konta předem připravit. všechny informace potřebné pro zakládáńı kont jsou přitom již evidovány ve fakultńım informačńım systému kos (komponenta studium) a nemuśı se tedy znovu zjǐst’ovat či digitalizovat. v současné době však neńı ještě zcela zautomatizován proces využit́ı těchto informaćı pro tvorbu kont. existuje sice převodńı program kosmood [duben 2006], ale vstupńı data, která se generuj́ı z databáze systému kos, se nám nepodařilo zajistit. použili jsme proto dávkový zp̊usob zakládáńı kont, který je integrovaný v systému moodle, a při kterém se všechny potřebné údaje přeb́ıraj́ı z připraveného textového souboru v zadaném formátu. na náš požadavek byl tento typ výstupu doplněn do systému kos. v současné době je potřeba ještě provést jistou ručńı úpravu tohoto výstupu a následně již lze nechat vygenerovat př́ıslušná studentská konta. tento postup může realizovat každý učitel kurzu systému moodle, který má př́ıstup do databáze kos. je zřejmé, že při větš́ım nasazeńı systému moodle ve výuce by bylo potřeba tvorbu studentských kont a zařazováńı student̊u do kurz̊u ještě v́ıce usnadnit. studijńı materiály studijńı materiály tvoř́ı základ každého výukového kurzu. v systému moodle mohou být v textovém formátu, ve formátu html nebo mohou být dostupné ve formě webového odkazu. studijńı materiály v kurzu igs2 jsme vytvořili ve formátu typu wiki, z něhož jsou soubory automaticky převáděny do formátu html. tvorba takových dokument̊u je jednodušš́ı a výsledné texty jsou unifikovaněǰśı. jistou nevýhodou je, že ve světě internetu existuje v́ıce formát̊u typu wiki navzájem se lǐśıćıch formátovaćımi možnostmi a použitými značkami. filosofie wiki dokument̊u povoluje jejich veřejnou editaci. tento princip podporuje volitelně i wiki systému moodle. stávaj́ıćı studijńı materiály kurzu igs2 považujeme za poněkud strohé a chystáme jejich oživeńı. vedle běžných prvk̊u jako obrázky nebo ikony uvažujeme o vytvořeńı sady ozvučených animaćı, které by názorně zachycovaly obt́ıžněǰśı etapy práce s grafickým systémem. studium by tak źıskalo pro studenty poutavěǰśı formu, bylo by efektivněǰśı a byla by tak podpořena i určitá forma distančńı výuky. oprava úkol̊u během semestru studenti pr̊uběžně vypracovávaj́ı jednotlivé úkoly a odevzdávaj́ı je dohodnutým zp̊usobem ve formě soubor̊u. v př́ıpadě předmětu igs1 i igs2 se jedná o kontrolu zhruba 100 soubor̊u každý týden. 
kontrola patř́ı k časově nejnáročněǰśım a nejméně obĺıbeným činnostem spojeným s výukou. jakákoli pomoc v této oblasti je proto učiteli velmi v́ıtána. geinformatics fce ctu 2006 167 správa výukových kurzů v systému moodle systém moodle nab́ıźı některé nástroje, které mohou opravu odevzdaných úkol̊u zpřehlednit a usnadnit. ve výuce kurzu igs2 jsme pro opravy úkol̊u použili následuj́ıćı pravidla. pro odevzdáńı každého úkolu byl stanoven termı́n a bylo sledováno a př́ıpadně bodově penalizováno jeho nedodržeńı. studenti mohli úkol před vypršeńım termı́nu opakovaně opravovat do té doby, dokud nebyl zkontrolován a ohodnocen vyučuj́ıćım. poté již úkol nebylo možné opravit. hodnoceńı úkol̊u mohl vyučuj́ıćı doplnit i slovńım zd̊uvodněńım. po ohodnoceńı úoklu učitelem byl automaticky př́ıslušnému studentovi odeslán o této skutečnosti e-mail s odkazem na stránku s hodnoceńım. vedleǰśı výhodou tohoto zp̊usobu odevzdáváńı a hodnoceńı úkol̊u je, že studenti maj́ı př́ıstup k výsledk̊um a hodnoceńı pouze svých vlastńıch výtvor̊u (nikoli k výsledk̊um spolužák̊u). podle našich zkušenost́ı nám systém moodle velmi usnadnil opravy studentských úkol̊u. domńıvám se, že u předmět̊u, ve kterých studenti často odevzdávaj́ı výsledky zadaných úkol̊u v digitálńı podobě, jsou možnosti systému moodle v této oblasti jedńım z hlavńıch argument̊u pro jeho nasazeńı ve výuce. testováńı student̊u testováńı znalost́ı student̊u tvoř́ı d̊uležitou součást každé výuky. u poč́ıtačově podporovaných kurz̊u to plat́ı dvojnásob [soukup 2004]. systém moodle umožňuje pracovat s mnoha typy test̊u, jejichž seznam lze dále rozšǐrovat. na našem pracovǐsti jsme v minulých letech vyvinuli vlastńı programový systém na testováńı student̊u. systém úspěšně použ́ıváme a pr̊uběžně zdokonalujeme již několik let. plánujeme proto implementovat tento testovaćı modul do prostřed́ı systému moodlu tak, aby se stal jeho integrálńı součást́ı, tj. aby např. výsledky testu byly zahrnuty mezi ostatńı hodnoceńı studenta. závěr domńıvám se, že cms nástroje maj́ı budoucnost a mohou přispět k zefektivněńı výuky. moodle je svoj́ı koncepćı vhodný kandidát pro nasazeńı v akademické sféře. jeho už́ıváńı neńı spojeno s licenčńımi poplatky a dostupnost zdrojových kód̊u umožňuje vlastńı vývoj jeho daľśıch rozš́ı̌reńı. v současné době jako jistou komplikaci při jeho větš́ım nasazeńı v našich podmı́nkách vid́ım jeho nedokonalou provázanost s fakultńım informačńım systémem kos, což komplikuje automatizaci zakládáńı studentských kont. uvedený nedostatek by ale mělo j́ıt vcelku snadno odstranit a pak by mohl systém moodle nalézt zaj́ımavé uplatněńı ve výuce řady předmět̊u. ohledně výuky předmětu interaktivńı grafické systémy uvažujeme o rozš́ı̌reńı a oživeńı výukových text̊u o multimediálńı prvky. obt́ıžněǰśı postupy práce s grafickými systémy bychom rádi zpracovali do podoby animovaných sekvenćı doprovázených mluveným komentářem. geinformatics fce ctu 2006 168 správa výukových kurzů v systému moodle literatura 1. soukup, p. (2004): e-learning and checking of study effectiveness by testing. in: proceedings of conference iceta 2004: information and telecommunications technologies in education, košice, slovak republic, 16.9. 18.9.2004, p. 371-375, isbn 80-89066-85-2 2. soukup, p., žofka, p. (2005): výuka interaktivńıch grafických systémů na oboru geodézie a kartografie stavebńı fakulty čvut praha. in: sborńı referát̊u (cd rom): 16. kartografická konference mapa v informačńı společnosti, brno, 7.9. 9.9.2005, str. 209-213. isbn 80-7231-015-1 3. 
duben, s.(2006): automatizace přihlašováńı do moodle. in: sborńık konference belcom 06 spolupráce univerzit při efektivńı tvorbě a využ́ıváńı vzdělávaćıch zdroj̊u.str. 3-8. cdrom 4. navrátil, j. (2006): e-learning na čvut. in: sborńık konference belcom 06 spolupráce univerzit při efektivńı tvorbě a využ́ıváńı vzdělávaćıch zdroj̊u.str. 77-80. cdrom geinformatics fce ctu 2006 169 výuka gis na fit vut v brně výuka gis na fit vut v brně martin hrubý department of intelligent systems faculty of information technology, brno university of technology e-mail: hrubym@fit.vutbr.cz kĺıčová slova: gis, výuka, programováńı, grass abstrakt př́ıspěvek pojednává o zavedeńı předmětu gis do výuky na fakultě informačńıch technologíı vut v brně. předmět se vyučuje už druhý rok. vzhledem k programátorskému zaměřeńı naš́ı fakulty je hlavńım posláńım předmětu zatáhnout do gis oblasti daľśı nové programátory. jako technické zázemı́ předmětu gis byl zvolen nástroj grass a to předevš́ım proto, že v době vzniku předmětu jsme nic jiného neměli. v pr̊uběhu letńıho semestru 2005/06 jsme koupili komerčńı nástroj arcgis firmy esri, takže mě jako garanta předmětu čeká rozhodováńı o budoućı formě výuky gis... základńım textem pro př́ıpravu kurzu byla kniha jána tučka “geografické informačńı systémy”. posloužila mi jako odrazový m̊ustek pro návrh osnov předmětu. fakulta informačńıch technologíı, vut naše fakulta vznikla před pár lety, kdy se tehdeǰśı ústav informatiky a výpočetńı techniky odtrhl od fakulty elektrotechniky a informatiky vut v brně. nově vzniklá fakulta se začala hned rychle rozr̊ustat, nab́ırat v́ıc student̊u, budovat nové studijńı programy a podobně. dá se ř́ıci, že studijńı obory a předměty jsou stále ještě ve velkém vývoji a neustálené. výhodou však je, že stav́ıme na naš́ı dlouhé tradici výuky poč́ıtač̊u a programováńı. já pracuji na ústavu inteligentńıch systémů. specializujeme se na poč́ıtačové modelováńı, simulaci a umělou inteligenci. zpráva o existenci gis se k nám dostala v podstatě náhodou. po jisté diskuzi o kompetenci r̊uzných ústav̊u vyučovat tuto problematiku se toho ujal náš ústav. v našem pojet́ı jsou gis technologie modelovým vyjádřeńım reality, na které lze aplikovat teorii poč́ıtačového modelováńı a simulace; gis technologie mohou být provázány s robotikou, agentńı simulaćı, poč́ıtačovými hrami, modely počaśı a podobně. ukázalo se, že pojem “informačńı systém” ve smyslu databáze zde hraje minoritńı význam. studium gis na fit jsme programátorská škola, ke geodézii a kartografii nemáme absolutně žádný vztah a proto se snaž́ım pojmout přednášky o gis programátorským zp̊usobem. gis chápeme jako specifický druh softwaru. jistým základ̊um se však nedá vyhnout, proto přednášky zahrnuj́ı i statě o geografických souřadných systémech, kartografii, základech geografie naš́ı planety a podobně. geinformatics fce ctu 2006 123 výuka gis na fit vut v brně v tomto textu jsou uvedeny témata jednotlivých přednášek. vypadá to, že každé jednotlivé téma by vydalo na samostatný předmět na specializované škole. z tohoto pohledu je náš předmět v podstatě jenom přehledový. chtěl bych ho směřovat do studia analytických algoritmů a nástroj̊u, modelováńı reálných situaćı do podoby gis aplikaćı, zapojeńı podp̊urné elektroniky, provázáńı na daľśı problémy modelováńı a simulace a předevš́ım na programátorský rozvoj gisovských nástroj̊u tedy programů. v tomto ohledu se zdá být grass ideálńı pro svou otevřenost. nab́ıźı se však otázka, jestli je opravdu koncepčńı ho rozv́ıjet. 
mám v dlouhodobém plánu zkoumat možnosti objektově orientovaných gis model̊u a jejich snadnou propojitelnost. proto se taky s kolegy a studenty budeme snažit položit základ nového experimentálńıho objektově orientovaného gis nástroje nebo alespoň ukázat smysluplnost nebo nesmysluplnost oo pojet́ı gis dat a model̊u. přednášky gis na naš́ı fakultě má semestr 13 přednáškových týdn̊u. snaž́ım se na každé přednášce probrat jedno téma. osnova je následuj́ıćı a nejsṕı̌s taková i několik let z̊ustane: 1. úvodńı seznámeńı s gisy pojem gis, pojmy geoobjekt, prostor (abstraktńı) a vazba pojmu prostor na geografický prostor, atribut. smyslem přednášky je předevš́ım vysvětlit základńı pojmy a motivaci pro celý tento vědecko-technický obor. 2. modelováńı geografického prostoru je ukázána problematika modelováńı geografického prostoru od nejnižš́ı abstrakce ve formě nepřesně změřené fyzické reality, přes jej́ı model ve formě geoidu a dále referenčńıho elipsoidu a zobrazeńı náhradńıho elipsoidu do souřadného systému. proberou se souřadné systémy š́ı̌rka-délka, utm a s-jtsk. s t́ım souviśı historie zjǐst’ováńı polohy na zemi, měřeńı vzdálenost́ı a podobně. 3. modelováńı geo-objekt̊u základńı př́ıstupy k modelováńı prostorových objekt̊u, koncept vektoru a rastru, vektorová topologie, uložeńı vektorových dat. z této přednášky by mělo alespoň intuitivně být student̊um jasné, jaké části reality modelovat vektorově a které rastrem (v závěru semestru chystám přednášku o konkrétńıch př́ıpadových studíıch). je objasněn pojem topologie minimálně jako zp̊usob organizace vektorových dat při ukládáńı. 4. rastrové vrstvy vzhledem k silné vazbě předmětu na nástroj grass je rastrovým vrstvám věnována zvláštńı přednáška. typické aplikace rastr̊u. po úvodńım seznámeńı s koncepćı rastr̊u se prob́ıraj́ı povrchy, digitálńı modely terénu a jejich r̊uzné vyjádřeńı. 5. geografické (gis) databáze r̊uzné generace gis z pohledu databázových systémů, objektově orientované db, postrelačńı databáze, postgis. 6. gis nástroj grass seznámeńı s grassem, uložeńı dat v grassu, lokace, mapsety, monitory, základńı operace, nviz. př́ıprava pro prvńı poč́ıtačové cvičeńı. 7. vstup geo-údaj̊u, základńı restrukturalizace údaj̊u primárńı a sekundárńı zdroje geografických údaj̊u, pr̊uzkum v terénu, fotogrammetrie a dpz, restrukturalizace vektor̊u a rastr̊u převody, změny měř́ıtka. přednáška je také podkladem pro druhé poč́ıtačové cvičeńı, kde se ukazuje geokoordinace naskenovaného kusu mapy a vektorizace vybraných část́ı obrázku. geinformatics fce ctu 2006 124 výuka gis na fit vut v brně 8. analýza (v rastrovém formátu) analýzy geografických dat jako hlavńı smysl gis, dotazy na geodatabázi, reklasifikace a mapová algebra, vzdálenostńı analýzy (buffer, š́ı̌reńı, prouděńı), výškové analýzy (sklon a orientace svah̊u, analýza osvětleńı, př́ımá viditelnost). několik rozsáhleǰśıch př́ıklad̊u v grassu 9. analýza (ve vektorovém formátu), analýza obrazu analýza śıt́ı, zóny dopravńı dostupnosti k obslužným centr̊um. vektorová analýza se vzhledem k nedostatk̊um ukázkových dat necvič́ı. daľśım tématem přednášky jsou ákladńı pojmy z analýzy dat z dpz úprava obrazu, identifikace objekt̊u v obraze, analýza multispektrálńıch dat. v tomto tématu bych chtěl rozhodně přednášky a cvičeńı pośılit. 10. gis ve státńı správě, zaváděńı git do organizaćı (přednáš́ı host dr. jitka machalová z pef mzlu) výměnou za to, že já u nich přednáš́ım rastrovou analýzu v grassu. 11. mapový výstup základńı pravidla pro tvorbu map, základy kartografie, tématické mapy, 3d vizualizace. 
tady zřejmě bude i mapserver. tato přednáška se poněkud zkvalitńı zavedeńım nástroje arcgis. 12. gps a podobné systémy. dpz pro meteorologii technické parametry, popis principu, rozš́ı̌reńı gps, daľśı podobné systémy, připojeńı gps k poč́ıtači, formát nmea. tato problematika je pro studenty zřejmě atraktivńı (asi se nejv́ıc bĺıž́ı jejich vńımáńı zapojeńı gis do života). druhou část́ı přednášky je popis systému meteosat pro sledováńı meteorologických prostorových proces̊u. chtěl bych v́ıce rozvést problematiku poč́ıtačového modelováńı počaśı, meteorologických map a podobně. 13. case studies zat́ım nenaplněné téma. chtěl bych tu ukázat konkrétńı inženýrské projekty se zapojeńım gis nástroj̊u a technologíı. zřejmě záviśı na dostupnosti dat nebo na spolupráci s jinými odborńıky. cvičeńı cvičeńı jsou zat́ım vedeny na systému grass. proto taky zač́ınaj́ı až po úvodńı přednášce o grassu a nav́ıc v době, když už na přednášce zazněly hlavńı pojmy. cvičeńı se dělá ve skupinkách po cca 20 studentech u poč́ıtač̊u. naše školńı poč́ıtače v laboratoř́ıch jsou naštěst́ı všechny vybaveny os linux (ms-windows je tam taky). instalace grassu je umı́stěna centrálně na file-serveru. menš́ı datasety si studenti vytvářej́ı ve svých domovských adresář́ıch. velký dataset, jako je třeba demonstračńı lokace cr-wgs84 od skupiny českých uživatel̊u grassu, je umı́stěn a použ́ıván souběžně všemi ve speciálńım sd́ıleném adresáři. náplńı cvičeńı je: 1. seznámeńı s grassem základńı operace, výpis mapových vrstev, monitory, zobrazeńı vrstvy, nastaveńı zobrazovaćıho regionu, měřeńı vzdálenost́ı a podobně. ukázka selekce vybraných část́ı vrstev. 2. vstup dat georeferencováńı zadaného obrázku, vektorizace vybraných partíı obrazu, pořizováńı atributových dat, správa databázové části grassu. geinformatics fce ctu 2006 125 výuka gis na fit vut v brně 3. analýza v rastru jako podklady jsou použity demonstračńı datasety z distribuce grassu (spearfish, leics). vzhledem k fatálńımu nedostatku vektorových dat se neprob́ıraj́ı analýzy založené na vektorech. 4. tady zřejmě přijde seznámeńı s arceditorem od esri náplň cvičeńı je podle mě celkem rozumná vzhledem ke stavu kurzu, technických možnostech a našemu hlavńımu zaměřeńı. v budoucnu nejsṕı̌s v́ıce zapoj́ım nástroj arcgis, ke kterému jsme dostali několik dvd geodat pro experimentováńı. nebylo by špatné mı́t cvičeńı na implementaci vybraného analytického algoritmu pro grass nebo arcgis. budeme instalovat taky rozhrańı arcsde, tak si lze představit experimenty s uložeńım a správou geodat na školńıch oraclovských serverech. projekty je u nás zvykem v každém předmětu hodnotit samostatnou studentskou práci na zadané nebo studentem zvolené téma tak zvaný projekt. obt́ıžnost projektu v jednotlivých předmětech je dána obvykle významnost́ı předmětu pro náš studijńı obor nebo prostě jenom nároky jeho vypisovatele-garanta předmětu. v předmětu gis neńı př́ılǐs mnoho alokovaného kreditového prostoru pro obt́ıžná zadáńı. ani je ze student̊u nevymáhám. studenti si své zadáńı voĺı sami z těchto tř́ıd: 1. studijńı ćılem je nastudovat vybranou problematiku v rozsahu převyšuj́ıćım výuku. tato kategorie je studentsky nejobĺıbeǰśı studenti tvrd́ı, že považuj́ı za odpočinek chv́ıli neprogramovat. nacháźım zde mı́sty velmi zaj́ımavé práce (tento rok jsem např́ıklad dostal velmi obsáhlou třicetistránkovou studii modelováńı větr̊u v čr včetně popisu př́ıslušných simulačńıch nástroj̊u a analýzy provozovatelnosti větrných elektráren v čr). 2. implementačńı implementace vybraných algoritmů a formát̊u. 
dostal jsem např́ıklad hezké pokusy o objektovou gis databázi, implementace rastrových analýz, prohĺıžečky r̊uzných formát̊u a daľśı. 3. infiltračńı ćılem je proniknout do nějaké gisařské firmy a vyzvědět od nich zaj́ımavé detaily o jejich práci (včetně např́ıklad ceńık̊u). 4. gisovské použit́ı gis nástroje pro zpracováńı geodatabáze zadané lokality. ve většině př́ıpad̊u tyto projekty koṕıruj́ı úlohy ze cvičeńı a v mnoha př́ıpadech poskytuj́ı téma student̊um, kteř́ı nechtěj́ı nic speciálńıho hledat nebo řešit. studentské bakalářské/diplomové projekty vypsal jsem několik témat pro studentské bakalářské a diplomové projekty. několik ukázek: 1. databázová podpora gis systému grass ćılem je poněkud narovnat db část grassu, prozkoumat napojeńı postgresu a podobně. v ideálńım př́ıpadě vyrobit něco jako arcsde pro grass. geinformatics fce ctu 2006 126 výuka gis na fit vut v brně 2. databázová podpora v projektech geografického pr̊uzkumu jednoduchá aplikace zobrazuj́ıćı mapové vrstvy na např́ıklad palmu s možnost́ı vrstvy editovat, měřit polohu a podobně. 3. gis systém pro obec implementace gis aplikace pro potřeby obce s možným webovým rozhrańım. 4. navigačńı systémy v geografických pr̊uzkumných akćıch výpočet navigace pro pohybuj́ıćı se objekt. napojeńı na ř́ıd́ıćı prvek (např́ıklad na autonomńı mobilńı robot). 5. objektově orientované geografické databáze lze pojmout na r̊uzných úrovńıch složitosti. zapojeńı topologie, mobilita objekt̊u, analytické operace. implementačńı prostřed́ı na bázi smalltalku, selfu. 6. vektorová analýza v gis systémech souhrná studia a implementace vektorových analýz na obecněǰśı úrovni. knihovna napojitelná na libovolná rozhrańı. možná revize vektorové analýzy v grassu, nové programové rozhrańı. 7. webové rozhrańı pro gis systém grass (diplomka přihlášená na př́ı̌st́ı rok) studium mapových server̊u, nezbytné zásahy do jádra grassu, správa v́ıce připojeńı. rozhrańı ve formě vzdáleného př́ıstupu ke grassu nebo pouze poskytovatele mapových služeb. některé závěry byly již publikovány [1]. 8. systém pro podporu sběru a správy programu rozvoje obćı v čr (bakálařský projekt, momentálně běž́ı) bakalářská práce našeho studenta ve spolupráci se studentem z př́ırodovědy muni. ćılem byl poč́ıtačový program, který bude generovat specifickou dokumentaci (program rozvoje obćı) z veřejně dostupných dat a z uživatelem zadaných dat. výzkum v gis ???? tady bych chtěl naznačit, kde bych plánoval přispět ve výzkumu gis problematiky. je mi jasné, že kvalitu výuky určuje i vědecká angažovanost učitele a to plat́ı speciálně na vysokých školách. současně s t́ım hledám oblast, která by měla návaznost na moje dosavadńı odborné zaměřeńı tedy na modelováńı, simulace a umělou inteligenci. proto mě napadaj́ı dvě oblasti: 1. prostorové modely zapojitelné do modelováńı inteligence hry, robotika. v posledńı době se zabývám matematickým modelem inteligence nazývaným teorie her. jednou z mnoha aplikaćı teorie her je i vývoj poč́ıtačových her, kde se spojuje umělá inteligence entity prováděj́ıćı strategické rozhodováńı a prostorový kontext jej́ı existence. mohlo by být zaj́ımavé modelovat prostorovou představu inteligentńı entity o svém okoĺı. v doméně kooperuj́ıćıch agentńıch systémů, př́ıpadně v robotice je prostorová představa dokonce ještě d̊uležitěǰśı. 2. objektově orientované geosystémy v této oblasti bych chtěl sledovat dva proudy: 3. koncepčnost návrhu a vytvářeńı geodatabáźı, kde objektová orientace má jasné mı́sto, geinformatics fce ctu 2006 127 výuka gis na fit vut v brně 4. 
mobilitu geo-objekt̊u, tedy snadnou přenositelnost libovolného elementu geografické databáze do zcela obecně jiného prostřed́ı. závěr předmět gis se na naš́ı fakultě, zdá se, uspěšně rozběhl. budu se snažit, aby měl stále hodně student̊u, zaj́ımavých projekt̊u a navazuj́ıćıch diplomových praćı. tento článek měl o něm podat zprávu a možná i někoho inspirovat. reference 1. hrubý martin: web interface for grass geographic information system, in: proceedings of xxviith internation autumn colloquium asis 2005, ostrava, cz, marq, 2005, s. 103-108, isbn 80-86840-16-6 geinformatics fce ctu 2006 128 evropská směrnice inspire evropská směrnice inspire pavla tryhubová department of mapping and cartography faculty of civil engineering, ctu in prague e-mail: pavla.tryhubova@fsv.cvut.cz abstrakt článek byl vytvořen na základě informaćı ze semináře a workshopu organizovaného sdružeńım nemoforum spolu se společným výzkumným centrem evropské komise (ec joint research centre ispra, itálie) a cenia, českou agenturou pro životńı prostřed́ı. tématem semináře a workshopu byla geoinformačńı infrastruktura čr a inspire. česká republika vstoupila do evropské unie (eu) spolu s daľśımi dev́ıti státy 1. května 2004. změny na trhu s geodaty uvnitř evropské unie se dotýkaj́ı i české republiky. hlavńım ohniskem těchto změn je iniciativa inspire. inspire by se měla stát evropskou směrnićı v roce 2007. inspire podporuje harmonizaci prostorových formát̊u dat, dostupnost datových sad a schopnost vyhledat r̊uzné datové sady. v prvńı části článku je shrnut posledńı vývoj uvnitř iniciativy inspire a v druhé části jsou zveřejněny závěry zmiňovaného semináře a workshopu. obecně o inspire inspire byl založen na souboru základńıch pravidel: � data by měla být sb́ırána jednou a držena na té úrovně kde je sb́ıráńı dat nejúčinněǰśı; � mělo by být možné propojit prostorové informace z r̊uzných evropských zdroj̊u a mezi mnoho uživatel̊u a aplikaćı; � mělo by být možné pro informace sebrané na jedné úrovni sd́ıleńı do všech ostatńıch úrovńı; � na všech úrovńıch by mělo být dostatečné množstv́ı geodat a za podmı́nek, které umožńı jejich rozsáhlé použit́ı; � mělo by být snadné naj́ıt, která geodata jsou dostupná, která se hod́ı pro zvláštńı použit́ı a za kterých podmı́nek je mohu źıskat a použ́ıvat; � geodata by se měla stát snadno pochopitelná a interpretovatelná inspire je návrhem (com (2004) 516 finale, 23/7/2004) evropské komise na směrnici pro založeńı infrastruktury pro prostorové informace v eu, která podporuje dostupnost, a př́ıstup k prostorovým informaćım. [1] iniciativa chce zajistit vytvořeńı evropské prostorové informačńı infrastruktury, která zpř́ıstupńı uživatel̊um integrované prostorové informačńı služby. tyto služby by měly dovolit uživatel̊um naj́ıt a zpř́ıstupnit prostorové nebo geografické geinformatics fce ctu 2006 176 evropská směrnice inspire informace z pestré škály zdroj̊u, od mı́stńı úrovně ke globálńı úrovni, interooperabilitńı cestou pro celou řadu použit́ı. uživatelské ćıle inspire zahrnuj́ı politiky, plánovače a evropské manažery, na národńı a mı́stńı úrovni a občany a organizace. [2] prostorová data hraj́ı d̊uležitou roli při rozhodováńı vlády, organizaćı i jednotlivc̊u. vlády potřebuj́ı plánovat politiku pro zemědělstv́ı, pr̊umysl, oblastńı r̊ust, dopravu a bezpečnost a pak potřebuj́ı sledovat postup své strategie a vidět zda nastávaj́ı žádoućı výsledky. mı́t zmapovanou zemi je d̊uležité pro efektivńı vývoj tržńıho hospodářstv́ı. 
podobné př́ıklady existuj́ı i na evropské úrovni, zvlášt’ když uváž́ıme požadavky evropské komise pro politiku plánováńı a rozhodovaćı strategie, např́ıklad navržeńı dopravńı śıtě nebo sledovańı znečǐstěńı životńıho prostřed́ı. zvládat tyto procesy na evropské úrovni by nebylo možné bez nějaké úrovně harmonizace. každá země má pro své mapy jiná měř́ıtka, jiné souřadnicové systémy, jiná zobrazeńı, některé země maj́ı zat́ım analogové mapy některé maj́ı digitálńı, každá země má jiné formáty dat, atd. evropská komise chce dát právńı rámec pro vytvořeńı a fungováńı prostorové geoinformačńı infrastruktury. inspire se zpočátku zaměřil na potřeby environmentálńı politiky ale postupně se rozšǐruje i do jiných sektor̊u (např. zemědělstv́ı, doprava). inspire zamýšĺı vytvořit infrastrukturu uvnitř evropy, která umožńı větš́ı př́ıstup k dat̊um a k použ́ıváńı prostorových dat. pak by mělo být snadněǰśı naj́ıt, která data existuj́ı, a každá země bude mı́t jisté minimálńı typy dat, ke kterým by uživatelé měli být schopni přistupovat a připojit si požadovanou zájmovou oblast. ćılem inspire, je odpovědět na všechny sporné otázky bude existovat jeden webový portál kde najdete data, která existuj́ı. jestliže momentálně tyto data neexistuj́ı, členské státy eu budou muset takové datové sady vytvořit. datové sady budou vyhovovat standard̊u, které muśı zajistit možnost spojeńı s daty z jiných zemı́ a z r̊uzných měř́ıtek. výhody pro evropskou komisi jsou zřejmé, ale jsou tu také nesporné výhody pro běžného občana. pokud stát dovoĺı větš́ı použ́ıváńı prostorových dat, otevře tak cestu pro tiśıce nových žádost́ı – např́ıklad d́ıky internetu, by občan mohl kontrolovat využit́ı územńıho plánováńı, zapsáńı vlastnictv́ı nebo nalezeńı nejbližš́ı banky s použit́ım mobilńıho telefonu. směrnice inspire uvád́ı požadavky pro tuto infrastrukturu a až se stane realitou, pak tyto požadavky budou předloženy národńı legislativě. proces, který bude trvat několik let, bude ř́ızený evropskou komiśı, ale bude harmonizován s pomoćı národńıch oborových organizaćı pro př́ıpravu a usnadněńı ćıl̊u inspire. aktuálńı stav směrnice návrh byl vytvářen několik let a je výsledkem práce geoinformačńıch expert̊u ze všech členských stát̊u. směrnice je nyńı procháźı fáźı spolurozhodovaćıho procesu. [3] parlament a rada es nedosáhli “prvńıho čteńı” dohody v červenci 2005, ale na zasedáńı rady pro zemědělstv́ı a rybářské oblasti, které se konalo v bruselu 23.ledna 2006 byl přijat společný postoj rady es a směrnice byla představena na plenárńı sch̊uzi v únoru 2006. parlament má ted’ 3 měśıce (plus jeden v př́ıpadě druhého čteńı dohody v radě es) pro vyjádřeńı ke sporným otázkám. to by znamenalo na květnové nebo červnové sch̊uzi, s diskuźı v (parlamentńım výboru) v březnu nebo dubnu. směrnice inspire by mohla vstoupit v platnost na začátku roku 2007. zároveň jako spolurozhoduj́ıćı proces se koná př́ıprava dohodnutého pracovńıho programu pro zahájeńı práćı před přijet́ım směrnice. práce záviśı na práci malé základńı skupiny v evropské komisi a na geinformatics fce ctu 2006 177 evropská směrnice inspire počtu dobrovolných expert̊u z řad členských stát̊u. tito experti vytvoř́ı sadu implementačńıch pravidel pro hlavńı oblasti iniciativy. od počátečńıho návrhu z července 2004 k úplně realizaci směrnice projde inspire třemi fázemi. směrnice inspire se rozv́ıj́ı a bude realizována v každém členském státu eu, česká republika, jako všechny členské státy, bude nucena implementovat řadu akćı. [4] př́ıpravná fáze (2005-2006) v tomto stádiu se právě nacháźıme. 
jak už bylo řečeno inspire nyńı procháźı spolurozhoduj́ıćı procedurou, evropská komise spolupracuje s radou a evropským parlamentem při vytvořeńı konečného tvaru směrnice. trváńı spolurozhoduj́ıćı procedury je odhadováno na dva roky a po schváleńı se z návrhu inspire stane směrnice evropského společenstv́ı. inspire bude vyžadovat, aby členské státy realizovaly množstv́ı opatřeńı. česká republika jako členský stát, si muśı být vědoma nastávaj́ıćıch závazk̊u a muśı rozumět hledisk̊um zdroj̊u, časovým rámc̊um, a spojeńı na aktuálńı vývoj nsdi. část opatřeńı vyžadovaných směrnićı muśı být realizováno př́ımo v členských státech, zat́ımco jiná vyžaduj́ı v́ıce detail̊u, popsaných v “implementačńıch pravidlech” (implementing rules). v nař́ızeńıch pro členské státy je část o tom, že členské státy muśı být schopny přizp̊usobit se časovému plánu realizace inspire, implementačńı pravidla muśı být samozřejmě dostupná ve správný čas. nyńı je připraveno pět skupin dobrovolńık̊u z členských stát̊u pod vedeńım konsolidačńıho týmu z jrcentra. skupiny: � skupina pro návrh metadat � skupina pro specifikace dat � skupina pro śıt’ové-internetové služby � skupina pro sd́ıleńı dat a služeb � skupina pro monitoring a zpravodajstv́ı implementačńı pravidla budou posuzována komunitou zaj́ımaj́ıćı se o prostorová data (spatial data interest communities sdic) a projdou iteračńım procesem předt́ım, než budou dány organizaćı pověřenou ř́ıdit sdi aktivity (legally mandated organisations lmos), kde budou ověřena pravidla a okomentován dopad a proveditelnost návrh̊u. návrhy pak p̊ujdou do veřejného projednáváńı, aby zahrnovaly všechny možné investory v př́ıpravné fázi. členské státy chtěj́ı hlasovat o finálńı verzi implementačńıch pravidel na daľśım výboru inspire. fáze transpozičńı (2007-2008) po přijet́ı inspire jako směrnice společenstv́ı, členské státy maj́ı obdob́ı dvou let na přeneseńı inspire do jejich národńı legislativy. v této chv́ıli muśı mı́t organizace v členských státech kontakt na evropskou komisi, je nutné zajistit koordinaci na úrovni evropského společenstv́ı, který také zahrnuje kontakt s geinformatics fce ctu 2006 178 evropská směrnice inspire komiśı. implementačńı pravidla, a vytvořené dobrovolnické skupiny budou vypracovávat podrobná opatřeńı, které maj́ı být přijata členskými státy, nebo v některých př́ıpadech komiśı. během transpozičńı fáze bude zapotřeb́ı přijmout implementačńı pravidla členskými státy dle časového plánu směrnice inspire. regulačńı charakter implementačńıch pravidel vyžaduje, aby je komise předala inspire výboru zástupc̊u členských stát̊u, který oficiálně začne s aktivitami na začátku transpozičńı fáze (během tř́ı měśıc̊u od vstoupeńı v platnost). inspire výbor má jako hlavńı úkol pomáhat komisi a předávat názory na návrh realizace implementačńıch pravidel navrhovaných komiśı. tyto názory se muśı odhlasovat. v př́ıpadě inspire, komise může přijmout implementačńı pravidla, jestliže je většina členských stát̊u odsouhlaśı. jestliže je neodsouhlaśı, komise předlož́ı navrhované implementačńı pravidla evropské radě a informuje evropský parlament. vůle rady pak může změnit nepřijet́ı hlasováńım většiny, jestliže se postav́ı proti návrhu, pověřeńı lidé muśı přezkoušet a podat doplněný návrh. jakmile budou implementačńı pravidla přijata, členské státy muśı zajistit jejich aplikaci podle časového plánu inspire. fáze realizace (2009-2013) až bude inspire transponován členskými státy do národńı legislativy, jeho požadavky budou realizovány a bude sledováno dodržováńı časového plánu inspire. 
koordinace na úrovni evropského společenstv́ı a na úrovni členských státu bude operačńı a zpravodajská zprávy o stavu implementace inspire a dodržováńı členskými státy muśı být podle časového plánu inspire. předpokládaný časový plán tabulka shrnuje milńıky inspire tak jak je vydal estatjrc. [6] milńıky jsou založené na přijet́ı inspire jako směrnice v 2007, samozřejmě existuje riziko, že tyto milńıky nebudou dodrženy: popis datum založeńı výboru inspire (během 3 měśıc̊u od vstoupeńı v účinnost) 2007 přijet́ı implementačńıch pravidel pro vytvořeńı a aktualizaci metadat přijet́ı implementačńıch pravidel pro śıt’ové služby přijet́ı implementačńıch pravidel pro už́ıváńı služeb třet́ımi osobami přijet́ı implementačńıch pravidel pro monitoring přijet́ı implementačńıch pravidel pro př́ıstup a práva pro použ́ıváńı prostorových soubor̊u dat a služeb pro instituce evropského společenstv́ı 2007 zpráva o přijet́ı implementačńıch pravidel 2008 přijet́ı implementačńıch pravidel pro už́ıváńı prostorových datových soubor̊u a služeb třet́ımi osobami 2009 přijet́ı implementačńıch pravidel pro sladěńı specifikace prostorových dat z př́ılohy i. a výměnu dat z př́ılohy i, ii a iii 2009 určeńı zodpovědnosti veřejných úřad̊u pro prostorové datové soubory a služby 2009 geinformatics fce ctu 2006 179 evropská směrnice inspire implementace rámce prostorových datových soubor̊u a služeb veřejnými institucemi 2009 realizace sledováńı implementačńıch pravidel 2009 metadata dostupná pro prostorová data odpov́ıdaj́ıćı př́ıloze i. a př́ıloze ii. prostorová data 2010 śıt’ové služby 2010 prvńı zpráva členských stát̊u komisi eu 2011 nové nebo aktualizované prostorové datové sady se stanou dostupné dle implementačńıch pravidel pro sladěńı specifikace a výměny prostorových dat z př́ılohy i. prostorová data 2011 přijet́ı implenačńıch pravidel pro sladěńı specifikace prostorových dat z př́ılohy ii a př́ılohy iii prostorová data 2011 dostupná metadata z př́ılohy iii prostorová data 2013 nové nebo aktualizované prostorové datové sady se stanou dostupné dle implementačńıch pravidel pro sladěńı specifikace a výměny prostorových dat z př́ılohy ii. a př́ılohy iii. prostorová data 2013 druhá zpráva členských stát̊u komisi eu 2013 seminář a workshop “geoinformačńı infrastruktura v čr a inspire” jaké je v české republice povědomı́ o směrnici inspire ukázal seminář a workshop věnovaný vzájemným vazbám mezi existuj́ıćı geoinformačńı infrastrukturou v české republice a záměry připravované evropské směrnice inspire. seminář a workshop se uskutečnil 14.3 a 15.3. 2006 v konferenčńım sálu čúzk v kobyliśıch. vysoká účast na semináři, kde bylo široké spektrum organizaćı zastoupeno př́ımo vedoućımi pracovńıky, potvrzuje zájem o vývoj inspire. semináře se zúčastnilo 100 účastńık̊u a workshopu 48 účastńık̊u. zastoupena byla jednak veřejná správa představiteli z ministerstev; dále zástupci kraj̊u; lidé z úřad̊u; posluchači a učitelé z vysokých škol ale i soukromé firmy. přednášej́ıćı byli osloveni sdružeńım nemoforum a jrc dodalo osnovu přednášky s předpřipravenými otázkami ohledně inspire a vztah̊um inspire k organizaci přednášej́ıćıho. všichni účastńıćı dostali ve svých materiálech k semináři paṕırový dotazńık rovněž s připraveným dotazńıkem a otázkami k inspire. přednášky a závěry s dotazńıky byly zpracovány a závěry zveřejněny na stránkách nemofora. [7] je zřejmé, že tématicky nebo územně vymezené geoinformačńı infrastruktury jsou chápany jako součást širš́ıho národńıho a evropského kontextu. 
směrnice inspire je považována za horizontálńı rámec, který umožńı vybudováńı národńı prostorové informačńı infrastruktury v rozsahu překračuj́ıćım stávaj́ıćı kompetenčńı aj. bariéry mezi jednotlivými resorty a institucemi. řada z prezentovaných př́ıklad̊u na národńı i regionálńı úrovni již nyńı odpov́ıdá princip̊um inspire. geinformatics fce ctu 2006 180 evropská směrnice inspire obrázek 1: účast na semináři obrázek 2: účast na workshopu projeveny zájem o aktivńı zapojeńı do př́ıprav inspire je možný uspokojit po třech liníıch: � zapojeńım do procesu tvorby technických prováděćıch předpis̊u inspire, který je koordinován ec jrc. � cestou vytvořeńı a registrace nových sdic a lmo nebo přidružeńım k takto již zaregistrovaným organizaćım (připomı́nkováńım, testováńım návrh̊u, účast́ı v pr̊uzkumu připravenosti v oblasti metadat); viz www.inspire-jrc.it � zapojeńım do procesu př́ıprav implementace směrnice inspire v čr. česká informačńı agentura pro životńı prostřed́ı (cenia) předložila k diskusi rámcový implementačńı plán a dolad’uje návrh umožňuj́ıćı informované zapojeńı zainteresovaných institućı a organizaćı do procesu př́ıprav. geinformatics fce ctu 2006 181 evropská směrnice inspire obrázek 3: graf byl vyhotoven z paṕırových dotazńık̊u obrázek 4: graf byl vyhotoven z prezentaćı přednášej́ıćıch návrh obsahuje zř́ızeńı předimplementačńı skupiny, implementačńıho výboru a navrhuje zahájeńı vyjednáńı. mžp a mi oficiálně přizvou daľśı ústředńı orgány a zástupce kraj̊u k účasti na vyjednáváńı směrnice. mžp vstouṕı do nemofóra, které se tak stane primárńı diskusńı platformou inspire. předimplementačńı skupina bude formálně ustanovena ze zástupc̊u ústředńıch orgán̊u, kraj̊u, obćı a profesńıch sdružeńı pod vedeńım mžp (účel), mi (nadresortnost) a čúzk (hlavńı poskytovatel). výstupem předimplementačńı skupiny bude návrh pro vládu čr: � identifikace hlavńıch hráč̊u (stakeholders) � definice strategických ćıl̊u geinformatics fce ctu 2006 182 evropská směrnice inspire � rozvoj současných aktivit � datová politika čr � legislativńı implementace � komunikačńı a diseminačńı politika inspire � definice organizace implementace inspire � dokumenty budou veřejně projednávány implementačńı výbor zajist́ı provedeńı a bude sledovat pr̊uběh a dávat zpětnou vazbu pro kontinuálńı zlepšováńı. ve vazbě na vývoj v eu a zájmy čr bude dávat podněty vládě čr k úpravám a doplněńı návrhu. závěr výsledky a zkušenosti, které vzešly d́ıky pilotńımu českému semináři a workshopu, poslouž́ı jako kvalitńı a transparentńı základ pro př́ıpravu daľśıch osvětových akćı v nových členských státech eu a přistupuj́ıćıch zemı́ch. z odpověd́ı na dotazńık i debaty vyplynulo množstv́ı návrhu, ze strany jrc a cenia, jak zlepšit informačńı služby zaměřené na inspire a jak pośılit osvětu (jrc, cenia spolu s profesńımi organizacemi, vysokými školami). náměty pro informačńı služby: � mohutněǰśı propagace ve veřejné správě koncepčńı semináře, přednášky na konferenćıch, osvětové letáčky, účasti v r̊uzných projektech (nature regine, projekt modernizace ústředńı správy) � zř́ızeńı diskusńıho fóra na internetu, zapojeńı internetových server̊u města a obce online, a daľśıch. literatura 1. http://inspire.jrc.it/ 2. http://www.ec-gis.org/inspire 3. http://europa.eu.int/comm/codecision/stepbystep/text/index en.htm 4. http://inspire.jrc.it/sdic call/rhd040705wp4a v4.5.3 final-2.pdf 5. council decision (1999/468/ec) official journal l 184/23, 17.7.1999, “laying down the procedures for the exercise of implementing powers conferred on the commission”. 6. 
working programme preparatory phase – http://inspire.jrc.it/sdic call/rhd040705wp4a v4.5.3 final.pdf 7. http://www.cuzk.cz/nemoforum geinformatics fce ctu 2006 183 http://inspire.jrc.it/ http://www.ec-gis.org/inspire http://europa.eu.int/comm/codecision/stepbystep/text/index_en.htm http://inspire.jrc.it/sdic_call/rhd040705wp4a_v4.5.3_final-2.pdf http://inspire.jrc.it/sdic_call/rhd040705wp4a_v4.5.3_final.pdf http://www.cuzk.cz/nemoforum experience with a livecd in an education process experience with a livecd in an education process jan růžička, frantǐsek kĺımek institute of geoinformatics faculty of mining and geology, vsb-tuo e-mail: jan.ruzicka@vsb.cz key words: livecd, gisák livecd, distance learning, e-learning abstract the paper describes how can be livecd (bootable cd) used for geoinformatics distance learning. we have prepared one livecd with basic software for learning geoinformatics and we have some feedback from users and teachers. the paper should evaluate this feedback. livecd is a cd-rom, that can be used as a bootable device. after booting from the cd, the user can access all resources compiled to the cd. there are operating system (usually based on gnu/linux) and (user, desktop) software installed and configured to be used directly after boot. our cd named gisák livecd contains basic gis software such as umn mapserver, grass, quantum gis, thuban, jump, gps drive, blender and we work on other software packages such as maplab for umn mapserver, postgis, geonetwork open source, catmdedit, gvsig, udig. gisák livecd contains set of spatial data from the czech republic. main part of the cd are tutorials for gis software. cd is open for other e-learning materials. now we have about 20 students using our livecd and few other users that are not curently our students. the paper should show pros & cons of the livecd usage for a distance learning. livecd live cd is bootable cd-rom with operating system and installed and configured programs. it is ready to use as full installed system directly after boot from cd-rom drive. there is more than one hundred distributions of live cd. we can say that there is a few targeted to the gis users. most well known live cd in the gis area is called gis-knoppix. that live cd can be used for education, research, testing, etc. but there are two aspects that made our decision to do not use that live cd in the czech republic. last distribution of the cd is not for free of charge at this time and the cd works with data mainly for the usa. gisák livecd background at the beginning we have to describe how came the idea of the project to our mind. it was not quick process, it comes quite slowly. in the autumn of the 2003 came new wave of thinking to the institute of geoinformatics. before that period we have used open source software only rarely. we used gis open source software for “playing” only, but not for serious work. we worked with grass and umn mapserver. after the autumn 2003 some of us geinformatics fce ctu 2006 64 experience with a livecd in an education process have completely moved to the os linux (thanks to michal and pavel) and we have started using the open source as a platform for our day work. in these days first projects based on open source have fired up. they have been related to the current teaching of some subjects. for example it was gvsb view project in the subject of java programming or it was project that moves subject called “software for gis ii” from close source (and commercial) software to the open source (and free) software. 
nowadays in a lot of study subjects is used open source software and other teachers are thinking about using open source for teaching. the main reason is (but i only suggest, you have to ask them) that they would like to give a chance to our students to do not use cracked software. common student have not got enough money to buy a licence for gis software (of course there is some commercial software free of charge for students, but usually you have to apply for a grant or do any other non popular things). when student would like to do his homework lessons, he (or she) has to either come to our laboratory (where the licensed software is available) or use cracked software. well, if a teacher think in the way such as: i will prepare my exercises to suit for example grass system, because my students can use it free of charge: we can say that it is not so easy. some of the software are not available for common os (windows) or it is very difficult to install them on it. that reason was one of our impulses for gisák livecd. another impulse came from second level of education. we are cooperating with some high schools and they are thinking about gis. but the software is expensive, data are not available, etc. nowadays we are using open source quite often and once somebody made a question: “what we are giving to open source community? we only use the products and do no give in back any to it”. that was another impulse. that were the reasons in a short review. let us show you list of project’s goals described in a different way than in the abstract of the paper: � give the set of open source software to our students in one compact form � give the tutorial data and set of tools to high schools in one compact form � prepare set of useful tutorials for used software � prepare data from the czech republic available free of charge � advertise open source gis tools � advertise os linux project stages 1. september 2004 first ideas 2. january 2005 first version for internal tests (based on linux4all [3]) 3. april 2005 first official version (based on linux4all) geinformatics fce ctu 2006 65 experience with a livecd in an education process 4. january 200 special edition for gis ostrava 2006 (based on kanotix-mini [2]) 5. october 2006 second version distributed via gi journal (plans) (based on kanotix or knoppix [4] or who knows) figure 1: logo gisák livecd data gisák livecd contains free geodata. there are spearfish dataset and geodata collected by martin landa1. we plan to include some data measured by our students and other geodata from public sources. software gisák livecd contains following software: � grass 6 figure 2: grass 6 1 http://grass.fsv.cvut.cz/wiki/index.ph|p/geodata cz geinformatics fce ctu 2006 66 http://grass.fsv.cvut.cz/wiki/index.php/geodata%5c_cz http://grass.fsv.cvut.cz/wiki/index.php/geodata%5c_cz experience with a livecd in an education process � jump 1.2 figure 3: jump � qgis 0.6 figure 4: qgis geinformatics fce ctu 2006 67 experience with a livecd in an education process � thuban figure 5: thuban � gps drive � umn map server 4.2 � blender we plan to add following software: � udig � gvsig � postgis � geonetwork open source � catmdedit tutorials there are few tutorials for grass, jump, thuban and qgis software, but we plan to add other tutorials and regural materials prepared by other teachers or students. 
geinformatics fce ctu 2006 68 experience with a livecd in an education process figure 6: umn mapserver figure 7: udig distance, e-learing using livecd we have experiences with a distance learning using e-learning techniques. for example our institute offers three e-learning courses. courses’ students can access learning materials on geinformatics fce ctu 2006 69 experience with a livecd in an education process figure 8: gvsig figure 9: geonetwork open source web (interactive e-learning system barborka [15]) or use static cd with learning materials in multimedia form. but some of the excersies need software installation, configuration and other problematic tasks. we have bad experiences with this way of e-learning. many of our geinformatics fce ctu 2006 70 experience with a livecd in an education process figure 10: catmdedit figure 11: open source gis platform students had difficulties with installing and configuring software available on cd. they have different os, different conditions (for example some of them are not administrators of their pcs used for education and they can not install software) and installing instructions can not geinformatics fce ctu 2006 71 experience with a livecd in an education process handle all platforms and all possible problems. we can minimalize such problems using livecd. all software is installed, configured and students can concentrate on necessary tsaks only. users’ experiences here are listed some of the users’ experiences using gisák livecd: – problems on some pcs/notebooks – usually solved by boot options + better than standalone that was a quite big surprise – problems with importing data v.in.ogr we have tried import dgn and everyting went well, the user did not specify aditional information needed for handling his problem + do not need install whole system to handle one or two subjects + possibility to distribute diploma thesis in a live form users’ requirements here are listed some of the users’ requirements using gisák livecd: � knowledge base – how to for specific tasks – tutorials can go to impasse � write on ntfs � save state users would like to pause their work and continue after new boot with state saved in some permanent memory (flash, hd) � need help with boot options some pcs needs to set up boot options before booting, but it is not so easy for not experienced user to set up them correctly. 
we should distribute some brochure that describes how to set up boot options for some list of devices teachers’ experiences + very useful on roads + prepared same conditions for all students + compact form – data and software on one place + useful when network is not available teachers’ requirements � tools for updating configuration without burning a new cd – on-line livecd will download configuration during booting from the web site � tools for installing new software on-line make own livecd compilation geinformatics fce ctu 2006 72 experience with a livecd in an education process pros & cons + free available tools + prepared data + prepared configuration workshops, competitions + integrated data with software + useful when network is not available – workshops, competitions – update cd need every time you change the content – never better than standalone installation except some situations conclusion � livecd is useful when other options are not available and that conditions are common � livecd brings same conditions for all students � livecd brings possibilities to integrate data, software and study materials to one compact form � solution based on livecd is limited by cd updating – burning a cd is not so convenient what can do students with livecd students can use livecd to perform following tasks (or will soon) and all of them are available on software from open source gis platform � geodata collecting, updating � geodata storing and distributing � geodata analysis � geodata visualization: desktop, internet, printing � geodata describing: metadata management � geodata converting: coordinate, formats future work � prepare dvd – with more data � prepare usb image – no burning necessary � prepare tools for more user (teacher) friendly cd updating � integrate other study materials geinformatics fce ctu 2006 73 experience with a livecd in an education process figure 12: open source gis platform � prepare other software � ... references 1. růžička j., kĺımek f., děrgel p., šeliga m. gis on linux4all live cd. in sborńık z konference gis ostrava 2005, ostrava, 2005, issn 1213-239x. dostupný na: www2 2. kanotix home page3 kanotix home page 3. linux4all. linux4all. 2004. available on www4 4. source pole. gis-knoppix. 2004. available on www5 5. grass development team. grass gis. 2004. available on www6 6. qgis.org. quantum gis. 2004. available on www7 7. thuban project team. thuban. 2004. available on www8 2 http://gis.vsb.cz/publikace/sborniky/gis ova/gis ova 2005/sbornik/referaty/ruzicka.htm 3 http://kanotix.com/files/kanotix/ 4 http://www.linux4all.de/livecd 5 http://www.sourceple.com/gis-knoppix 6 http://grass.itc.it/ 7 http://www.qgis.org/ 8 http://thuban.intevation.org/ geinformatics fce ctu 2006 74 http://gis.vsb.cz/publikace/sborniky/gis_ova/gis_ova_2005/sbornik/referaty/ruzicka.htm http://kanotix.com/files/kanotix/ http://www.linux4all.de/livecd http://www.sourceple.com/gis-knoppix http://grass.itc.it/ http://www.qgis.org/ http://thuban.intevation.org/ experience with a livecd in an education process 8. the jump project. jump unified mapping platform. 2004. available on www9 9. regents of the university of minnesota. umn mapserver. 2004. available on www10 10. maptools.org. maplab. 2004. available on www11 11. postgis.org. postgis. 2004. available on www12 12. ganter f. gps drive. 2004. available on www13 13. orĺık a., růžička j., stromský j., děrgel p., kamler j. správa časoprostorových dat v prostřed́ı postgresql/postgis, sborńık z konference open weekend 2005, praha 15. 
16.10.2005, isbn 800103349x 14. růžička j. workshop open source gis. available on www14 15. trio team 2003 from vsb-tuo. barborka. available on www15 9 http://www.jump-project.org/ 10 http://mapserver.gis.umn.edu/ 11 http://www.maptools.org/maplab/index.phtml 12 http://www.postgis.org/ 13 http://gpsdrive.kraftvoll.at/ 14 http://gis.vsb.cz/ruzicka/seminare/opensource/index.php 15 http://barborka.vsb.cz/lms/ geinformatics fce ctu 2006 75 http://www.jump-project.org/ http://mapserver.gis.umn.edu/ http://www.maptools.org/maplab/index.phtml http://www.postgis.org/ http://gpsdrive.kraftvoll.at/ http://gis.vsb.cz/ruzicka/seminare/opensource/index.php http://barborka.vsb.cz/lms/ quantum gis plugin for czech cadastral data anna kratochvílová and václav petráš students of geoinformatics programme faculty of civil engineering czech technical university in prague abstract this paper presents new quantum gis plugin for czech cadastral data and its development. qgis is a rapidly developing cross-platform desktop geographic information system (gis) released under the gnu gpl. qgis is written in c++, and uses the qt library. the plugin is developed in c++, too. the new plugin can work with czech cadastral data in the new czech cadastral exchange data format called vfk (or nvf). data are accessed through vfk driver of the ogr library. the plugin should facilitate the work with cadastral data by easy search and presenting well arranged information. information is displayed in the way similar to web applications, thus the control is friendly and familiar for users. the plugin supports interaction with map using qgis functionality and it is able to export various cadastral reports. this paper provides ideas which can be generalized to develop qgis plugin dealing with specific data. keywords: vfk, nvf, cadastre, čúzk, gis, qgis, ogr, c++, plugin 1. introduction 1.1. czech cadastre modern czech cadastre has a long history with roots in austria-hungary in the 19th century. since 2001 the czech cadastral office for surveying, mapping and cadastre (čúzk) has provided cadastral data in electronic form via the information system of the cadastre of real estate (iskn) [1]. many organizations and companies, both state and commercial, use this opportunity at an increasing rate. several data formats exist, some of them comply with inspire specifications [2]. the most used format is the czech cadastral exchange data format called vfk (or nvf) which contains all data related to real estate. unlike the other formats vfk contains information about ownership. cadastral data in vfk are widely used by municipalities and state institutions for execution of their duties. for using vfk data proper software is needed. beside some proprietary programs, there have been attempts to provide free software solution: module v.in.vfk for grass gis, otevřený katastr (open cadastre) [3] for import to postgis and vfk driver [4] in ogr library. the last-named has the advantage that gdal/ogr library is used by many programs, both proprietary and free open source. however, typical end-user is not able to access the functionality without some wrapper application providing graphical user interface (gui). geoinformatics fce ctu 8, 2012 91 kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data for working with cadastral data which are basically spatial data an ideal environment is a geographical information system (gis). 1.2. qgis for the development of the application we decided to use quantum gis (qgis) [5]. 
qgis has many advantages both for the user (e.g. customizable and easy to use gui) and the developer (e.g. good api, simple plugin system). qgis is a free open source gis licensed under gnu gpl1. qgis is written in c++ programming language and uses qt framework2. there are several possibilities how to add new functionality to qgis. firstly, a new plugin for qgis desktop application can be written in c++ or python. secondly, it is possible to build your own application based on qgis library. finally, you can directly modify existing qgis application. the presented plugin was developed for qgis desktop, however also other components of qgis exists (e.g. qgis server, qgis browser). in order to develop a plugin for qgis it is necessary to use qgis api (application programming interface). the api is well documented and a guide for plugin development is available, too. thanks to gpl license everyone who obtains qgis application can obtain also its source codes. as a result one can prove how performed analyses are implemented which is crucial for academic work [6]. for public administration, low cost of free open source software can be a decision criterion. efforts to introduce free open source solutions to public administration can be found on european level.3 2. vfk data format and ogr-vfk driver 2.1. vfk data format the czech cadastral exchange data format called vfk (or nvf) replaced previous format vkm in 19964. files in this format are provided by the czech office for surveying, mapping and cadastre. vfk contains information about real properties (parcels, buildings, building units), including their description, geometry and related property rights. in contrast to other current czech cadastral formats (e.g. inspire cadastral parcels), vfk contains information about owners. vfk files are provided to public for fee, however municipalities and state institutions obtain it free of charge [7]. from the technical point of view vfk is a text format which resembles csv format (see below). &hpar;id n30;stav_dat n2;datum_vzniku d;... &dpar;1319150210;0;"23.06.2003 14:54:44";... &dpar;1319151210;0;"24.04.2002 09:26:01";... 1gnu general public license, http://www.gnu.org/licenses/licenses.html 2http://qt.nokia.com 3https://joinup.ec.europa.eu/ 4http://www.cuzk.cz/dokument.aspx?prareskod=998&menuid=0&akce=doc:10-sdeleni_k_svf geoinformatics fce ctu 8, 2012 92 http://www.gnu.org/licenses/licenses.html http://qt.nokia.com https://joinup.ec.europa.eu/ http://www.cuzk.cz/dokument.aspx?prareskod=998&menuid=0&akce=doc:10-sdeleni_k_svf kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data building parcel ownership report other real estate rights ownership holder of real estate rightsbuilding unit figure 1: simplified relations between entities in database (for full schema please refer to [9]) official format description and underlying database schema is provided, however this documentation [8] is not sufficient for building queries. the description provided in [9] was used as a reference for the plugin development. for schema overview refer to figure 1. 2.2. ogr-vfk driver ogr5 is an free open source c++ library which enables read (and in certain cases also write) access to various geospatial vector formats including esri shapefile, postgis or oracle spatial. ogr is the part of gdal library, therefore it is also referred as gdal/ogr. this library is used in many free open source software projects like grass gis, qgis or mapserver and in proprietary software (e.g. 
esri arcgis) which is possible thanks to mit style free software license. the support of the czech cadastral exchange format (vfk) was missing till 2010 when m. landa [4] implemented the vfk driver which has then become the part of the library. thanks to the driver every software using ogr library can access czech cadastral data in vfk format. during the qgis vfk plugin development the vfk driver was improved by its author in order to reflect needs of the plugin. the most significant enhancement is the export to sqlite3 file database. this file is then used by the plugin but it can be accessed by any sqlite browsing and editing tools. 3. implementation 3.1. qgis plugin for czech cadastral data qgis plugin for czech cadastral data is primary intended for using by local municipalities. the implementation of vfk driver in ogr library enabled to view czech cadastral data in 5http://www.gdal.org/ogr/ geoinformatics fce ctu 8, 2012 93 http://www.gdal.org/ogr/ kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data qgis, however it was still inconvenient to browse the non-spatial but significant part of the data which is represented by a large and complex set of attribute tables. the aim of the plugin is to facilitate the work with this kind of data which can be then viewed and analysed in relation to the spatial part. the development of the plugin was split into two parts. during the first part we developed an application not connected to qgis. this involved the development of the crucial functionality — database search in sqlite database created by vfk-ogr driver, and generating and exporting various reports. during the second part the application was connected to qgis as a plugin so that the map interaction could be implemented. data are handled by the plugin in the following way. qgis reads spatial data from vfk file through vfk-ogr driver. the driver creates sqlite file, from which plugin reads attributes of spatial features and other related non-spatial data. plugin connects these two sets of data through unique identifiers defined in vfk documentation [8]. in order to use the plugin functionality user is supposed to open the vfk file from within the plugin instead of using the qgis standard dialog for adding vector layers. this enables to optimize loading process comparing to standard qgis ogr layer loading process by avoiding calling certain unnecessary procedures. another improvement affecting the performance is the fact that the driver uses the sqlite database file from previous run if available. during the plugin development crucial attribute columns were identified and vfk-ogr driver now creates database indices for this columns. indices improve the speed of attribute querying in the plugin. these changes were implemented by the author of the vfk-ogr driver. the plugin was developed with development version of qgis and development version of ogr. 3.2. plugin functionality plugin is in the stage of first prototype and its functionality is so far limited. however, it contains basic functionality to solve all common tasks. this functionality includes search according to various parameters depending on searched feature or object. currently it is possible to search information about parcels, buildings and owners. 
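to make the data access path described in section 3.1 more concrete, the following minimal sketch (our illustration, not code from the plugin, which is written in c++) opens a vfk file through the gdal/ogr python bindings and lists the cadastral layers it contains; the file name is only a placeholder.

from osgeo import ogr

# open the exchange file; the vfk driver parses it (and, as described above, can expose the data through an sqlite3 database)
ds = ogr.Open("export.vfk")
if ds is None:
    raise RuntimeError("could not open the vfk file - is gdal built with the vfk driver?")

# list the layers (e.g. PAR = parcels, BUD = buildings) with their feature counts
for i in range(ds.GetLayerCount()):
    layer = ds.GetLayer(i)
    print(layer.GetName(), layer.GetFeatureCount())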
several reports are available: • report about parcels • report about buildings • report about building units (flats or non-residential space) • report about owners • ownership reports (according to czech cadastre style) these reports are interconnected (figure 2) so that the user can easily get from one report to another. plugin contains a browser similar to the standard web browser. this browser enables interactive browsing of attribute data in the following way: geoinformatics fce ctu 8, 2012 94 kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data neighbouring parcels building holder of real estate rights ownership report parcel building unit figure 2: interconnections between the reports • data are represented as an styled html page. • hyperlinks are used for navigation through various reports. • navigation includes buttons back and forward. • browsing history of visited pages does not require to redo database queries. plugin provides the possibility to show current state of parcels and buildings in web application viewing cadastre (nahlížení do katastru nemovitostí, čúzk application) which provides limited access to cadastral data with map interface. this application is launched in system web browser showing currently selected feature. unlike viewing cadastre, vfk plugin provides possibility to search by owner (e.g. find all real estates of one owner). reports generated by the plugin can be exported into two formats — html with css stylesheet and latex(enables creating pdf). html can be easily imported to openoffice.org or libreoffice so that the document structure is preserved. in order to use information from database search or search information about features selected in map the following functionality for synchronizing was developed: • parcels and buildings which are currently shown in plugin browser can be selected (highlighted) in the map. • information about selected feature(s) in the map can be shown in plugin browser. for convenience, cadastral map layers are displayed with predefined style. most importantly it includes displaying parcel numbers with special cadastral formatting (style originally coming from austria-hungary). brief help page is embedded in the plugin. it contains hyperlinks to find wanted functionality easily. it is worth noting that when using the plugin user can profit from all the functionality provided by qgis too (e.g. loading wms layers). geoinformatics fce ctu 8, 2012 95 kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data figure 3: plugin showing information about parcels’ owner figure 4: docked plugin window with hidden control panel geoinformatics fce ctu 8, 2012 96 kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data 3.3. plugin gui the graphical user interface (gui) of the plugin is divided into two main parts — the browser showing data reports and the control panel with toolbar for data import and search. the gui was designed so that new functionality could be added easily without making the gui cumbersome. thanks to powerful qt framework, the plugin window can be floating or docked anywhere in qgis application window (see figures 3 and 4). this is particularly advantageous to users using large screens. this way user can see both the map and related data. 4. further development further development will be based on results of testing. 
nevertheless, there are some improvements which are already planned: • the direct support of other output formats (pdf, odf) • export of geometry • using threads for time consuming data loading and querying (make the user interface responsive) • database file handling • add more layer styles 5. conclusion the czech cadastral office for surveying, mapping and cadastre manages large database containing both spatial and non-spatial data. these data have been used increasingly since launching internet based remote access in 2001 [1]. the majority of clients is coming from public administration which gets this data free of charge [7]. presented plugin aims to local municipalities which need tools for accessing these data. they can gain advantage from solution which uses free open source software. apart from money saving, they avoid dependency on one certain software supplier and they can get software which respects their needs [10]. the plugin makes use of two free open source projects qgis and gdal/ogr to provide functionality needed for handling czech cadastral data. it uses gis environment to interactively browse spatial and attribute data and bind them together to get clear overview. 6. acknowledgement we are grateful to the author of vfk-ogr driver, ing. martin landa, for additional enhancements of the driver needed for the plugin. we also thank to mr. jiri sobotik from municipality novy jicin who encourages us to start the plugin development and contributes with his ideas about plugin functionality and testing. geoinformatics fce ctu 8, 2012 97 kratochvílová a., petráš v.: quantum gis plugin for czech cadastral data 7. references 1. český úřad zeměměřický a katastrální. annual report 2011. čúzk, 2012. isbn 978-80-86918-66-2. url: http://www.cuzk.cz/generujsoubor.ashx? nazev=10-evz2011 2. souček, petr and jiří formánek. data spravovaná resortem čúzk jsou stále přístupnější. in: gis ostrava 2012 současné výzvy geoinformatiky. url: http: //gis.vsb.cz/gis_ostrava/gis_ova_2012/sbornik/papers/soucek.pdf 3. jedlička, karel, jan ježek and jiří petrák. otevřený katastr – svobodné internetové řešení pro prohlížení dat výměnného formátu katastru nemovitostí. in geoinformatics fce ctu. praha: čvut, 2007. p. 111-117. url: http://geoinformatics. fsv.cvut.cz/gwiki/geoinformatics_fce_ctu_2007 4. landa, martin. ogr vfk driver implementation issues. in: proceedings – symposium gis ostrava 2010. p. 8. isbn 978-80-248-2171-9, issn 1213-239x. url: http: //gis.vsb.cz/gis_ostrava/gis_ova_2010/sbornik/lists/papers/en_1_10.pdf 5. quantum gis development team, 2012. quantum gis geographic information system. open source geospatial foundation project. url: http://qgis.osgeo.org 6. rocchini, duccio and markus neteler. let the four freedoms paradigm apply to ecology. trends in ecology. 2012, 27, vol. 6, p. 310-311. issn 01695347. doi: 10.1016/j.tree.2012.03.009. url: http://www.sciencedirect.com/science/article/ pii/s0169534712000742 7. česká republika. zákon české národní rady ze dne 7. května 1992 o katastru nemovitostí české republiky (katastrální zákon). in: sbírka zákonů české republiky. 1992. url: http://portal.gov.cz/zakon/344/1992 8. český úřad zeměměřický a katastrální. struktura výměnného formátu informačního systému katastru nemovitostí české republiky [online]. 23. 2. 2012 [cit. 2012-04-07]. url: http://www.cuzk.cz/generujsoubor.ashx?nazev=10-d12u. 9. landa, martin. návrh modulu grassu pro import dat ve výměnném formátu iskn. master thesis. 2005. čvut praha. 
url: http://gama.fsv.cvut.cz/~landa/publications/ 2005/diploma_thesis/martin.landa-thesis.pdf. 10. fogel, karl. tvorba open source softwaru: jak řídit úspěšný projekt svobodného softwaru. 2010. cz.nic, 2012. isbn: 978-80-904248-5-2. url: http://knihy.nic.cz/ files/nic/edice/karl_fogel_poss.pdf. geoinformatics fce ctu 8, 2012 98 http://www.cuzk.cz/generujsoubor.ashx?nazev=10-evz2011 http://www.cuzk.cz/generujsoubor.ashx?nazev=10-evz2011 http://gis.vsb.cz/gis_ostrava/gis_ova_2012/sbornik/papers/soucek.pdf http://gis.vsb.cz/gis_ostrava/gis_ova_2012/sbornik/papers/soucek.pdf http://geoinformatics.fsv.cvut.cz/gwiki/geoinformatics_fce_ctu_2007 http://geoinformatics.fsv.cvut.cz/gwiki/geoinformatics_fce_ctu_2007 http://gis.vsb.cz/gis_ostrava/gis_ova_2010/sbornik/lists/papers/en_1_10.pdf http://gis.vsb.cz/gis_ostrava/gis_ova_2010/sbornik/lists/papers/en_1_10.pdf http://qgis.osgeo.org http://www.sciencedirect.com/science/article/pii/s0169534712000742 http://www.sciencedirect.com/science/article/pii/s0169534712000742 http://portal.gov.cz/zakon/344/1992 http://www.cuzk.cz/generujsoubor.ashx?nazev=10-d12u http://gama.fsv.cvut.cz/~landa/publications/2005/diploma_thesis/martin.landa-thesis.pdf http://gama.fsv.cvut.cz/~landa/publications/2005/diploma_thesis/martin.landa-thesis.pdf http://knihy.nic.cz/files/nic/edice/karl_fogel_poss.pdf http://knihy.nic.cz/files/nic/edice/karl_fogel_poss.pdf geoinformatics fce ctu 12, 2014 55 determination of pavement elevations by the 3d scanning system and its verification tomáš k�emen, martin štroner, pavel t�asák czech technical university in prague, faculty of civil engineering, thákurova 7, 166 29, praha 6, czech republic, tomas.kremen@fsv.cvut.cz martin.stroner@fsv.cvut.cz pavel.trasak@fsv.cvut.cz abstract it is necessary to be careful of geometric accuracy of the roadways when constructing them. correct thickness of the individual construction layers together with roughness of the pavement belongs among important influences ensuring lifetime of the roadways and vehicles and for comfortable and safe car ride. it is necessary beside other things to have a reliable check measurement method at disposal so as to ensure the required accuracy of the individual construction layers will be achieved. the check measurement method must be able to measure a checked construction component with the required accuracy and with sufficiently high density describing not only global deviations, but also local deviations. the highest requirements on accuracy are placed on the final construction layer of the roadway. layer thickness and pavement roughness are being evaluated here. the 3d terrestrial scanning method is currently offered for geometric checking of its realization. the article deals with testing of procedure of the pavement roughness measurement with the 3d terrestrial scanning system and with its verification by a total station measurement. emphasis is put on verification of accuracy of absolute heights of points in the 3d model of the pavement and on size of random errors in the elevation component. results of the testing clarified using the 3d terrestrial scanning systems and their accuracy for check of the roadway surface. key words: 3d scanning, checking measurement, pavement roughness, elevation 1. introduction several survey methods can be used for determination of pavement roughness. 
they are firstly these methods: precise geometric levelling with level instrument, trigonometric levelling by spatial polar method with total station, 3d scanning methods [1], [2] or photogrammetric method. each of them has characteristic advantages and disadvantages that are important for choosing suitable method for specific job order. accuracy, measurement density, time and labour consumption belong to these characteristics. 3d scanning methods – static and kinematics – are convenient because of measurement density, time and labour consumption [3], but examination of their accuracy suitability is complicated. accuracy of kinematics methods is stated in centimetres [4]. this accuracy is usually suitable only for rough measurement. accuracy of statics methods is stated in millimetres and it is enough for fine measurement. department of special geodesy of the czech technical university in prague was asked by the control system international (csi) company to verify accuracy of the static terrestrial 3d scanning that was used for determination of the road pavement roughness. the aim of the project was design and realization of checking procedure for verification of the scanning method. it was decided to build temporary test field in the area where standard k�emen, t. et al: determination of pavement elevations by the 3d scanning … geoinformatics fce ctu 12, 2014 56 commercial measurement was carried out. the precise trigonometric measurement with a total station was used for determination of spatial parameters of the test field. the field was measured by the csi company 3d scanning method. for comparison, the field was measured by the 3d scanning method of the department of special geodesy as well. 2. locality, survey net the measurements were carried out in the locality of jino�anská spojka – chaby in south-west part of prague. there is four-lane highway with one bridge above the dalejský brook. road length is about 1.2 km. the total length of the measured area was 2.5 km (measurement was carried out in both directions of the road). test field was only in 150 m long part (figure 1). the surveying net was created for needs of the communication building. the surveying net was traverse consisting of six survey points located in the axis of the communication. the coordinates of the points were given by the investor. accuracy of the survey net was checked and then improved for needs of the measurement verification. figure 1: position of the test field 3. instruments the rover trimble 5800 (accuracy 10 mm 1 ppm), the leica tcra 1103 total station (accuracy of angle measurement is 1 mgon and accuracy of distance measurement is 2 mm + 2 ppm * d) and the riegl vz-400 3d scanning system was used for measurement by the csi company. the technical specification of the vz-400 are: standard deviation of distance measurement in the range 100 m is 5 mm, divergence of laser beam is 0.35 mrad, measurement range is up to 160 m for natural targets with more than 20% albedo and angle measurement resolution is better than 0,0005° in both direction. the total station trimble s6 hp robotic (accuracy of angle measurement is 0.3 mgon and accuracy of distance measurement is 1 mm + 1 ppm * d) and 3d scanning system leica hds3000 was used for measurement by the department of special geodesy. 
the technical specifications of the hds3000 are: standard deviation of distance measurement is 4 mm up to 50 m, spot size is 4 mm up to 50 m, measurement range is up to 134 m for natural targets with more than 18% albedo and angle accuracy in both directions is 60 µ rad. 4. measurement the measurements of the test field and the surface pavement of the road by vz-400 were carried out in 24th september 2013. the weather conditions were overcast and temperature about 12 °c. the measurement of the test field by hds3000 was carried out in 26th september 2013. the k�emen, t. et al: determination of pavement elevations by the 3d scanning … geoinformatics fce ctu 12, 2014 57 weather conditions were overcast during measurement first four areas of the test field and during measurement of the last areas was sunshine. temperature was about 12 °c. 4.1 measurement of the test field the test field was designed as five separated areas. each area was placed on a pavement surface. first two areas were placed on a bridge. the first area was in a two-lane part of the road inwards to the city. the second area was in a two-lane part of the road outwards direction from the city. third and fourth areas were in the same places of the road as the previous two areas but about 85 m further in the direction to the city. the last fifth area was in a two-lane part of the road inwards to the city about 56 m further in the direction to the city than the third area (see figure 1). measurement of the test field was carried out by the precise trigonometric method with the trimble s6 total station. measurement was carried out from one position of the instrument (point no. 5001). the checking points were measured in one set (two positions of the telescope) for elimination of the instrument construction errors and higher accuracy. the measurement was carried out with automatic aiming at omnidirectional prism. special flat peak on the bottom of the measuring rod was used. three survey net points no. 6002, 6003 and 6004 were used for connection of the measurement into the state coordinate systems s-jtsk (position) and bpv (elevation). net of checking points with regular spacing among the points was measured in each area of the field. numbers of checking points are shown in the table 1. table 1: number of checking points in the test field entire field 1. area 2. area 3. area 4. area 5. area number of points 161 50 35 27 28 21 4.2 measurement with vz-400 all parts of the communication were measured by 3d scanning technology owned by the csi company. this is its short description. the technology is founded on “stop and go” 3d scanning with georeferencing by gnss and total station measurements. the scanner vz-400 is mounted on the mast which is fixed to the car. the gnss rover trimble 5800 is mounted on the top of the scanner. surrounding of the scanner is measured by the scanner and position of the scanner is measured by the gnss rover. area with radius 50 m is measured from one position of the scanning system. density was bigger than 2000 points in 1 m2 on the ground (spacing 2 cm x 2 cm). next position of the system is usually 30 – 40 m further. next measurement is carried out by total station. the control surfaces are measured along both sides of the road in regular spacing about 20 m. these control surfaces are used for correction of the final 3d model elevation. other detailed informations about measurement with this technology are business secret. 
4.3 measurement with hds3000
scanning with the hds3000 system is a common static terrestrial measurement. the hds3000 system was used only for the measurement of the test field areas. each area of the test field was scanned from one scanner position. the scanning density was set to 10 mm in the horizontal and 5 mm in the vertical direction of the field of view at a distance of 10 m, which means that the density of points on the pavement surface was better than 25 mm in the longitudinal and 10 mm in the transversal direction with respect to the scanning direction. the measurement range was from 3 m to 11 m in each area. twelve control points were placed around the scanned areas: four control points around the first and the second area, another four around the third and the fourth area, and the last four around the fifth area. the first four control points were measured with the trimble s6 total station for connection into the reference systems from the first free station (point 4001); the next eight points were measured with the same total station from the second free station (point 4002). the points 6003 and 6004 were used for the connection of the free stations to the reference systems. the resection and the detailed measurement were measured in one set.

5. processing

5.1 determination of test field points coordinates
the coordinates of the test field points were first calculated in a local coordinate system. the calculated heights were corrected for the influence of earth curvature, which amounts to d²/(2r), where d is the horizontal distance and r is the earth's radius. then all points were transformed from the local coordinate system into the state reference systems. the points no. 6002, 6003 and 6004 were used for the calculation of the transformation key. the accuracy of the given elevations was low, therefore it was improved by the transformation, and the new elevations of the net's points were used for the further calculations. 161 checking points were determined. the accuracy of the checking points' heights is better than 1 mm. this accuracy was determined from many previous measurements of displacements of bridges carried out by the department of special geodesy and from an analysis of atmospheric refraction [5].

5.3 processing of the vz-400 measurement
the measured data were transformed into the state reference systems by means of the gnss measurement and then adjusted by the icp algorithm in the "multi station adjustment" function of the riscan pro software. the measured data were reduced; the density of detailed points was 25 points per 1 m² after the reduction. the elevations of the detailed points were adjusted by means of the control surfaces measured by the total station. the last operation was the creation of a digital terrain model (dtm) of the locality. the elevation accuracy of this model was then verified.

5.4 processing of the hds3000 measurement
the measured data were transformed into the state reference systems using the control points measured by the total station. the elevations from the total station measurement were corrected for the influence of earth curvature. the areas of the test field were cut from the transformed point clouds and dtms of the areas were calculated. no other processing of the point clouds was made.

5.5 verification of the accuracy of the laser scanning technologies
a special software for the comparison of the elevations of the test field checking points with the elevations of the dtms from scanning at the same places was created for the verification of the laser scanning technologies.
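the comparison itself is simple; the following sketch (our illustration, not the software itself) computes the statistics defined by equations (1)–(5) below from a list of checking-point heights and the corresponding dtm heights interpolated at the same places; the input values are placeholders, not the measured data.

import math

def compare_heights(h_checkpoints, h_dtm):
    # height differences between the checking points and the dtm, equation (1)
    dh = [hc - hd for hc, hd in zip(h_checkpoints, h_dtm)]
    n = len(dh)
    # sample standard deviation of the differences, equation (2)
    s_dh = math.sqrt(sum(d * d for d in dh) / (n - 1))
    # trueness = arithmetic mean of the differences (systematic offset), equation (3)
    t = sum(dh) / n
    # precision = sample standard deviation about the mean, equations (4) and (5)
    v = [t - d for d in dh]
    p = math.sqrt(sum(r * r for r in v) / (n - 1))
    return s_dh, t, p

# placeholder heights in metres
print(compare_heights([251.432, 251.501, 251.387], [251.430, 251.503, 251.384]))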
This software was created by our colleague Ing. Bronislav Koska, Ph.D. It calculates the height difference Δh_i between the elevation of checking point H_CP,i and the elevation of the DTM H_DTM,i at the same location:

\Delta h_i = H_{CP,i} - H_{DTM,i} .   (1)

The sample standard deviation of the height differences s_Δh is calculated for the accuracy analysis:

s_{\Delta h} = \sqrt{ \frac{ \sum_{i=1}^{n} \Delta h_i^2 }{ n-1 } } ,   (2)

where n is the number of height differences. The value s_Δh describes both the trueness and the precision of the measurement with the 3D scanning system. The trueness t of the measurement is calculated as the arithmetic mean of Δh_i:

t = \frac{1}{n} \sum_{i=1}^{n} \Delta h_i .   (3)

The precision p of the measurement is calculated as the sample standard deviation of the height differences:

p = \sqrt{ \frac{ \sum_{i=1}^{n} v_i^2 }{ n-1 } } ,   (4)

where

v_i = t - \Delta h_i .   (5)

The trueness gives information about the systematic error of the transformation of the 3D scanning data into the height reference system. The precision gives information about the random height errors of the 3D scanning.

6. Results

The results of the comparison of the DTM elevations from 3D scanning with the elevations of the test field checking points are shown in Table 2. It lists the height difference sample standard deviation s_Δh, the trueness t and the precision p for the whole test field and for the particular areas of the field, for both 3D scanning technologies. The results show that the inner accuracy of the 3D scanning technology is around 1 mm and that the HDS3000 is slightly better than the VZ-400. The HDS3000 results in the 5th area are worse than in the previous areas; this is probably caused by atmospheric refraction, because the sun came out from behind the clouds during the measurement of this area. The overall accuracy of both 3D scanning technologies is worse than the accuracy in some of the particular areas. This is caused primarily by systematic errors in the transformation into the height reference system. The main part of the determined height differences for the 3D scanning with the VZ-400 is caused by an incorrect transformation into the height reference system.

Table 2: Results (all values are in mm)

                      entire field   1st area   2nd area   3rd area   4th area   5th area
HDS3000   s_Δh            1.57         0.72       1.26       1.08       1.45       3.39
          t               0.52         0.15      -0.21      -0.53       1.24       2.95
          p               1.48         0.71       1.24       0.94       0.72       1.54
VZ-400    s_Δh            4.04         4.40       5.03       1.36       2.09       5.63
          t               3.55         4.23       4.79       1.00       1.89       5.37
          p               1.90         1.07       1.29       0.89       0.81       1.19

7. Conclusion

The verification of the static 3D scanning accuracy showed that both tested static 3D scanning methods are sufficient for works where the required standard deviation in elevation is 5 mm. The inner accuracy of the 3D scanning is better than 2 mm, but the overall error is influenced by errors in the transformation into the height reference system. The care of the total station operators and the application of all reductions, especially the correction for the influence of earth curvature, are very important, because this correction can reach several millimetres (d²/2R ≈ 0.8 mm at d = 100 m and ≈ 3.1 mm at d = 200 m). Another source of errors could be atmospheric refraction.

Acknowledgements

The article was written with support of the internal grant of Czech Technical University in Prague SGS14 "Optimization of acquisition and processing of 3D data for purpose of engineering surveying".

References

[1] Křemen, T., Kašpar, M., Pospíšil, J.: Operating quality control of ground machines by means of the terrestrial laser scanning system. In: Image Engineering and Vision Metrology [CD]. Dresden: ISPRS, 2006, ISSN 1682-1750.
[2] Křemen, T., Pospíšil, J.,
koska, b.: laser scanning for checking earth moving works. in: ingeo 2008 4th international conference on engineering surveying [cdrom]. bratislava: department of surveying, sut in bratislava, 2008, isbn 978-80-2272971-0, s. 1-10. [3] štroner, m. pospíšil, j. koska, b. k�emen, t. – urban, r. smítka v. t�asák p.: 3d skenovací systémy. praha, eská technika nakladatelství vut, 2013, isbn 978-80-01-05371-3. [4] koska, b.: calibration of profile laser scanner with conical shape modification for autonomous mapping system. in: proceedings of spie – videometric, range imaging, and applications xii; and automated visual inspection, munich, 2013, isbn 978-0-81949607-2. [5] urban, r. michal, o.: analýza technologie pro ur�ování pr�hybové �áry mostních konstrukcí. grant journal [online]. 2013, ro�. 2, �. 2, issn 1805-062x. mapserver vs. mapserver mapserver vs. mapserver jáchym čepický department of geoinformatics faculty of forestry and wood technology, mendel university of agriculture and forestry in brno e-mail: jachym.cepicky@centrum.cz david procházka department of informatics faculty of business and economics, mendel university of agriculture and forestry in brno e-mail: xproch17@pef.mendelu.cz jitka machalová department of informatics faculty of business and economics, mendel university of agriculture and forestry in brno e-mail: machalov@mendelu.cz kĺıčová slova: wms, mapserver, arcims abstrakt m̊užeme ř́ıci, že mapy prož́ıvaj́ı d́ıky moderńım technologíım svou renesanci. dı́ky aplikaćım, jako je google maps1 či seznam-mapy2 mohou uživatelé śıtě náhle pracovat s geoinformacemi zp̊usobem, na jaký doposud nebyli zvykĺı a tento nový zp̊usob je bav́ı. pro vytvořeńı obrázku mapy, který je bud’ zobrazen v gisu nebo v okně prohĺı̌zeče je potřeba mı́t stroj odpov́ıdaj́ıćıho výkonu a programové vybaveńı schopné takový obrázek vytvořit. tento článek se snaž́ı pomoci nalézt odpověd’ na otázku “jaký mapový server je pro má data nejvhodněǰśı?“. mapové servery mapové servery jsou programy generuj́ıćı požadovanou mapu jako obrázek na základě požadavk̊u klientských programů. výsledný obrázek předávaj́ı webovému serveru, a ten pak zpět klientskému programu. primárńı funkćı mapového serveru je nač́ıst data z r̊uzných zdroj̊u a jejich spojeńı dohromady do výsledného obrázku [1]. na ,,data“ poskytovaná mapovými servery lze přistupovat bud’ ze specializovaných programů geografických informačńıch systémů a nebo na př́ıklad z klientských aplikaćı napsaných pro webové prohĺıžeče. lze předpokládat, že uživatele obou typ̊u těchto programů zaj́ımá kromě vzhledu výsledného obrázku předevš́ım rychlost s jakou se tento obrázek objev́ı u nich na obrazovce. 1 http://maps.google.com 2 http://mapy.seznam.cz geinformatics fce ctu 2006 101 http://maps.google.com http://mapy.seznam.cz mapserver vs. mapserver správce takových server̊u zase zaj́ımá zátěž, které je vystaven stroj, na němž jsou uložena data a na němž prob́ıhá vykreslováńı výsledných obrázk̊u, at’ už z hlediska zat́ıžeńı disk̊u, tak z hlediska zátěže procesoru. komunikace mezi klienty a mapovými servery může prob́ıhat bud’ pomoćı proprietárńıho rozhrańı a nebo přes rozhrańı standardńı. standardem v př́ıpadě mapových server̊u je tzv. služba wms3 (web mapping service), definovaná konsorciem ogc4 (open geospatial consorcium). web mapping service ogc je mezinárodńı standardizačńı sdružeńı, zabývaj́ıćı se předevš́ım standardy v oblasti geografických informačńıch systémů, jejich výměnných formát̊u a podobně. 
mezi standardy definované touto organizaćı patř́ı mimo jiné wms (web mapping service), wfs (web feature service), wcs (web coverage service), gml (geography markup language) a daľśı. web mapping service5 je definovaná v dokumentu ogc 06-042 [2], který popisuje komunikaci mezi mapovým serverem a klientskou aplikaćı. mezi dotazy, které muśı takový server být schopen spracovat patř́ı mimo jiné � getcapabilities, vrát́ı popis dostupných dat na mapovém serveru, jejich formát̊u, geografické projekce a daľśı informace � getmap, který vrát́ı výsledný obrázek mapu podle zadaných vstupńıch parametr̊u. protože právě wms je rozhrańı, na jehož základě komunikuje většina programů stahuj́ıćıch z mapových server̊u data (arcgis6, qgis7, udig8, grass9, umn mapserver10, ...), zaměřili jsme se v tomto testu právě na toto rozhrańı. test k vlastńımu testu byl použit server hp proliant ml 350t03 (http://indica.mendelu.cz11), s nainstalovaným operačńım systémem ms windows 2003 sp1. vybavený je dvěma 73gb scsi disky v konfiguraci raid-1, 2gb pamět́ı, procesorem intel xeon 3 3.06ghz (32b). důvod pro zvoleńı tohoto operačńıho systému byl, že server se použ́ıván i pro běh licenčńıho manažeru pro produkty esri, který je pouze pro ms windows. jako webový server byl, na základě doporučeńı firmy esri, zvolen apache 2.0 s nadstavbou tomcat 5.0 a jre 1.4.2. 3 http://en.wikipedia.org/wiki/web map service 4 http://opengeospatial.org 5 http://en.wikipedia.org/wiki/web map service 6 http://www.esri.com/software/arcgis/ 7 http://qgis.org 8 http://udig.refractions.net/confluence/display/udig/home 9 http://grass.itc.it 10 http://mapserver.gis.umn.edu 11 http://indica.mendelu.cz geinformatics fce ctu 2006 102 http://en.wikipedia.org/wiki/web%5c_map%5c_service http://opengeospatial.org http://en.wikipedia.org/wiki/web%5c_map%5c_service http://www.esri.com/software/arcgis/ http://qgis.org http://udig.refractions.net/confluence/display/udig/home http://grass.itc.it http://mapserver.gis.umn.edu http://indica.mendelu.cz mapserver vs. mapserver použitá data k testu byla použita data u oblasti školńıho lesńıho podniku křtiny ,,masaryk̊uv les“. územı́, na kterém byl test prováděn je ohraničeno souřadnicemi 16d35’30.12“e 49d13’5.52”n a 16d48’44.64“e 49d21’18.72”n (wgs84). pro test byly použity vrstvy: � letecké sńımky 50 leteckých sńımk̊u v infra-červeném spektru v rozlǐseńı 0.5 m, jejichž barevná paleta byla redukována na 256 barevných odst́ın̊u. pr̊uměrná velikost jednoho rastru je 1.89 mb (formát tiff + tfw) � digitálńı model terénu jednotná rastrová mapa v rozlǐseńı 5 m (formát tiff + tfw) � mapa využit́ı p̊udy vektorová mapa ve formátu esri shapefile, obsahuj́ıćı 32488 liníı a 11647 ploch rozdělených do 13 kategoríı. servery byly nastaveny tak, aby se mapová vrstva vykreslovala bez popisk̊u s pouze vybarvenými plochami. � typologická mapa vektorová mapa ve formátu esri shapefile, obsahuj́ıćı 14731 liníı a 5282 ploch rozdělených do 133 kategoríı. mapová vrstva byla nastavena tak, aby se plochy vykreslily pomoćı šraf (rastrový obrázek velikosti 2x2 pixely) s popiskami jednotlivých kategoríı. od p̊uvodńıho záměru, testovat data v zobrazeńı s-jtsk jsme upustili z d̊uvodu problémů s t́ımto zobrazeńım v knihovně proj.4 (kterou využ́ıvá umn mapserver ) na operačńım systému ms windows a problémů při konfiguraci arcims. data tak byla pomoćı nástroj̊u gdalwarp a ogr2ogr převedena do zobrazeńı lat/long (referenčńı elipsoid wgs84). 
pro umn mapserver byly ještě letecké sńımky opatřeny souborem obsahuj́ıćım ,,mapu“ jednotlivých leteckých sńımk̊u tile index. použité mapové servery university of minnesota mapserver12 (současný název je pouhé mapserver ) je asi nejrozš́ı̌reněǰśı open source a free software gis program. je vyv́ıjen a udržován početnou komunitou uživatel̊u. lze jej spouštět bud’ jako cgi aplikaci nebo pomoćı rozhrańı mapscript zapracovat do r̊uzných programovaćıch jazyk̊u. pro účely testu byl použit předkompilovaný mapserver verze 4.8.1, stažený ze stránek projektu, který byl spouštěn jako cgi. druhým použitým mapovým serverem je arcims13 firmy esri. nativńım formátem pro komunikaci serveru je arcxml. pro testy bylo použito arcims verze 9.1 sp1, ve které integrován wms connector – nadstavba napsaná v jazyku java, která umožňuje publikovat data ve formátu wms. wms connector přistupuje k mapových službám běž́ıćım na serveru a de facto pro ně vytvář́ı wms rozhrańı. vlastńı test vlastńı test byl proveden skriptem k tomuto účelu vytvořeným. skript byl napsán v programovaćım jazyce python. k měřeńı času potřebného k vytvořeńı obrázku s mapou byl využit 12 http://mapserver.gis.umn.edu/ 13 http://www.esri.com/software/arcgis/arcims/index.html geinformatics fce ctu 2006 103 http://mapserver.gis.umn.edu/ http://www.esri.com/software/arcgis/arcims/index.html mapserver vs. mapserver modul timeit s funkćı repeat. tato funkce14 potřebuje ke svému běhu dva parametry: � počet opakováńı tohoto testu � počet voláńı testované funkce vrámci jednoho testu. obě hodnoty byly nastaveny na 10. funkce vraćı č́ıselné pole, obsahuj́ıćı počty sekund potřebné k provedeńı každého testu (v našem př́ıpadě pole o deseti prvćıch). výsledná č́ısla jsou tedy počty sekund potřebných k vytvořeńı, stažeńı a uložeńı 10 obrázk̊u s mapami. nelze ř́ıci, že se jedná o počet sekund potřebných k vytvořeńı obrázku. celkový čas je kromě parametr̊u na straně serveru závislý na prostupnosti śıtě, rychlosti ukládáńı souboru na straně klienta a dobou potřebnou k vykonáńı samostatné funkce. funkce save file dostane jako sv̊uj parametry vždy uri, ze kterého má stahovat potřebná data. ještě před t́ım, než započne se stahováńım dat, jsou hraničńı souřadnice upraveny o náhodnou hodnotu. ćılem této úpravy bylo zamezit nač́ıtáńı odpověd́ı z cache na straně serveru. [...] t = timeit.timer("""mapservervsmapserver.save_file( "%s", verbose=%d, bbox="%s", mapserv="%s" )""" %\ (uri,verbose,bbox,mapserv), "import mapservervsmapserver") times = t.repeat(10,10) [...] number = 0 def save_file(uri, verbose=0, bbox=none, mapserv=""): # bbox randomization newbox = "" rand = random.random()/100-0.005 for cord in bbox.split(","): newbox += str(float(cord)+rand)+"," newbox = newbox[:-1] uri += "&bbox=%s" % (newbox) global number number += 1 if verbose == 3: print number 14 http://diveintopython.org/performance tuning/timeit.html geinformatics fce ctu 2006 104 http://diveintopython.org/performance%5c_tuning/timeit.html mapserver vs. 
mapserver if verbose > 3: print number, uri map = urllib.urlopen(uri) input = open(’map-%s-%03d.png’ % (mapserv,number),’wb’) input.write(map.read()) input.close() return testovány byly následuj́ıćı varianty: � ,,jednoduchý“ rastrový soubor celého územı́ (digitálńı model terénu) � ,,jednoduchý“ rastrový soubor celého územı́ (digitálńı model terénu) výřez územı́ � ,,náročná“ vektorová mapa celého územı́, včetně rastrových textur a popisk̊u (typologická mapa) � ,,náročná“ vektorová mapa celého územı́, včetně rastrových textur a popisk̊u (typologická mapa) výřez územı́ � ,,náročný“ rastrový soubor celého územı́ letecké sńımky � ,,náročný“ rastrový soubor celého územı́ letecké sńımky výřez územı́ � kombinace ,,náročného“ rastru s ,,náročnou“ vektorovou mapou na celém územı́ letecké sńımky + typologická mapa � kombinace ,,náročného“ rastru s ,,náročnou“ vektorovou mapou na celém územı́ letecké sńımky + typologická mapa výřez územı́ � ,,jednoduchá“ vektorová mapa celého územı́ (mapa využit́ı p̊udy) � ,,jednoduchá“ vektorová mapa celého územı́ (mapa využit́ı p̊udy) výřez územı́. celé územı́ bylo ohraničeno (výchoźımi) souřadnicemi 16d35’30.12“e 49d13’5.52”n a 16d48’44.64“e 49d21’18.72”n. výřez pak 16d42’7.128“e 49d17’14.388”n a 16d42’39.996“e 49d17’35.412”n. obrázky byly stahovány ve formátu png o velikosti 400×400 pixel̊u. výsledky sekvenčńı dotazy z jednoho klienta na server tyto testy popisuj́ı řady sekvenčńıch dotaz̊u z jednoho klienta na mapový server. pro kladeńı dotaz̊u na mapový server arcims jsou použity porty 80 a 8080. pokud je dotaz položen přes port 80, převezme jej webový server apache, ten jej předá tomcatu a následně se zavolá samotný wms connector (který źıská data z běž́ıćı mapové služby arcims ). pokud je dotaz položen přes port 8080, prob́ıhá komunikace př́ımo s nádstavbou tomcat a je tedy ušetřen čas komunikace s web serverem. geinformatics fce ctu 2006 105 mapserver vs. mapserver dmt dmt (detail) typo– logie typo– logie (detail) ortho ortho (detail) ortho + typologie ortho + typologie (detail) land– use land– use (detail) arcims (port 8080) 7,52 5,61 14,73 5,82 27,23 8,88 34,59 8,62 37,90 6,38 arcims (port 80) 14,51 7,89 21,37 8,38 48,61 26,27 50,15 23,17 41,03 8,10 mapserver 4,38 3,28 32,38 3,00 20,65 4,66 46,37 4,85 6,55 3,28 tabulka 1: srovnáńı rychlosti vykreslováńı r̊uzných typ̊u vrstev servery obrázek 1: jak rychle je daná vrstva zpracována r̊uznými servery (větš́ı čas znamená horš́ı výsledek) sekvenčńı dotazy ze serveru na server tyto hodnoty popisuj́ı dotazy pokládané ze stejného poč́ıtače, jako je ten, na kterém je instalován mapový server. porovnáńım s předchoźımi hodnotami źıskáme vliv komunikace po śıti na výsledné časy. geinformatics fce ctu 2006 106 mapserver vs. mapserver obrázek 2: jak rychle dokáž́ı servery zpracovávat jednotlivé vrstvy dmt dmt (detail) typo– logie typo– logie (detail) ortho ortho (detail) ortho + typo– logie ortho + typo– logie (detail) land– use land– use (detail) arcims (port 8080) 3,65 2,89 12,74 2,95 24,43 5,63 31,80 5,78 35,52 2,83 arcims (port 80) 12,84 3,64 19,10 5,17 49,37 28,20 51,50 22,85 38,82 4,27 mapserver 2,39 1,67 31,20 1,53 17,17 2,98 44,76 3,39 4,50 1,21 tabulka 2: srovnáńı čas̊u nutných odpovědi při kladeńı dotaz̊u pouze v rámci serveru (neprob́ıhá komunikace po śıti) geinformatics fce ctu 2006 107 mapserver vs. 
mapserver obrázek 3: jak rychle je daná vrstva zpracována r̊uznými servery obrázek 4: jak rychle dokáž́ı servery zpracovávat jednotlivé vrstvy porovnáńı sekvenčńıch dotaz̊u nı́že uvedená tabulka a graf shrnuj́ı rozd́ıly mezi dotazy klient-server a server-server. je patrné, že zpožděńı při dotazech přes poč́ıtačovou śıt’ je ve většině př́ıpad̊u téměř konstantńı. t́ımto testem se nav́ıc vzájemně ověřili i naměřené hodnoty. je patrné, že při opakovaném měřeńı dosahujeme obdobných výsledk̊u. hodnoty bez č́ısla 2 jsou naměřené při kladeńı ze samotného serveru. hodnoty s č́ıslem 2 (např. arcims (port 80) 2) jsou naměřeny při kladeńı dotaz̊u z klienta na server. geinformatics fce ctu 2006 108 mapserver vs. mapserver dmt typologie ortho ortho + typologie landuse arcims (port 80) 12,84 19,10 49,37 51,50 38,82 arcims (port 80) 2 14,51 21,37 48,61 50,15 41,03 arcims (port 8080) 3,65 12,74 24,43 31,80 35,52 arcims (port 8080) 2 7,52 14,73 27,23 34,59 37,90 mapserver 2,39 31,20 17,17 44,76 4,50 mapserver 2 4,38 32,38 20,65 46,37 6,55 tabulka 3: srovnáńı vlivu komunikace po śıti na dobu nutnou k odpovědi obrázek 5: porovnáńı dotazu z klient-server a ze server-server porovnáńı sekvenčńıch a paralelńıch dotaz̊u následuj́ıćı tabulka srovnává délky odpověd́ı v př́ıpadě kladeńı dotaz̊u z jednoho pc, samotného serveru a paralelně z 10 r̊uzných pc. zobrazená naměřená hodnota pro 10 pc je pr̊uměrem naměřených hodnot ze všech pc (rozd́ıly mezi hodnotami z pc byly ve všech uvedených př́ıpadech pod 10%). položky označené “single” popisuj́ı dotazy server-server. od testovańı arcims přes port 80 bylo pro paralelńı zátež bylo upuštěno, protože je z předchoźıch výsledk̊u zcela evidentńı, že se pro takovéto nasazeńı nehod́ı. geinformatics fce ctu 2006 109 mapserver vs. mapserver dmt typologie ortho ortho + typologie single arcims (port 8080) 3,65 12,74 24,43 31,80 single arcims (port 80) 12,84 19,10 49,37 51,50 single mapserver 2,39 31,20 17,17 44,76 1xpc arcims (port 8080) 7,52 14,73 27,23 34,59 1xpc arcims (port 80) 14,51 21,37 48,61 50,15 1xpc mapserver 4,38 32,38 20,65 46,37 10xpc arcims (port 8080) 16,98 80,90 187,68 250,05 10xpc mapserver 18,31 242,06 83,68 287,84 tabulka 4: srovnáńı vlivu sekvenčńıho a paralelńıho kladeńı dotaz̊u obrázek 6: porovnáńı sériového a paralelńıho kladeńı dotaz̊u závěr vyvozovat závěry z naměřených výsledk̊u je vždy problematické. do značné mı́ry zálež́ı na úhlu pohledu a interpretaci. nav́ıc každý test může být napadnut z hlediska ne zcela objektivńı metodiky měřeńı, použitých prostředk̊u nebo konfiguraci daného produktu. primárńım ćılem tohoto článku proto bylo poskytnout výtah z námi źıskaných dat a popsat podmı́nky, za kterých byla tato data naměřena. na základě zde zmı́něných výsledk̊u lze bez újmy na objektivitě ř́ıci: geinformatics fce ctu 2006 110 mapserver vs. mapserver � jak se dalo předpokládat, je markantńı rozd́ıl mezi rychlost́ı zpracováńı požadavku u produktu arcims při kladeńı dotaz̊u přes port 80 a port 8080. v př́ıpadech, kdy je webový server apache vypuštěn a dotaz je položen př́ımo tomcatu (standardně port 8080), je požadavek vyř́ızen často i v polovičńı době. pro nasazeńı s vysokou zátěž́ı serveru je tedy velmi vhodné zvolit tuto variantu. � otevřený projekt mapserveru je pro poskytováńı mapových služeb minimálně konkurenceschopných řešeńım komerčńımu produktu arcims. jak je patrné z výsledk̊u slabým mı́stem mapserveru je mapováńı v souborech uložených textur do vektorové mapy (vrstva typologie). 
při práci s rastry i vektorovými podlady však dosahuje velmi zaj́ımavých výsledk̊u. relativně ńızký výkon arcims je pochopitelný, pokud vezmeme v potaz fakt, že wms connector je napsán v javě a nemůže tedy dosáhnout takové efektivity, jako kdyby byl předkompilován pro určitou platformu. tedy výkon wms connectoru je do značné mı́ry dán i výkonem tomcatu. � horš́ı výsledky při zpracováńı rastrových map u obou řešeńı by mohly být teoreticky vylepšeny vybudováńım pyramid (u produktu arcims např. uložeńım rastru do arcsde15). � sledováńı zátěže serveru v pr̊uběhu test̊u potvrdilo předpoklad, že zat́ımco při zpracováńı rozsáhlých rastrových dat je limitńım faktorem rychlost disku, při zpracováńı vektorových dat je limitńım faktorem rychlost procesoru. bylo by bezesporu zaj́ımavé, porovnat i daľśı známé a v praxi často použ́ıvané mapové servery, zejména geoserver16, deegree17, nově uvolněný produkt mapguide18 či český topol internet server19. při daľśım testováńı hodláme rovněž porovnat rychlosti zpracováńı dat v nativńıch formátech jednotlivých řešeńı. reference 1. mitchell, tyler (2005): web mapping illustrated, o’reilly media, inc., sebastopol. 2. beaujardiere, jeff (2006): opengis(r) web map server implementation specification, open geospatial consortium inc., ogc(r) 06-042, version 1.3.0, http://www.opengeospatial.org 15 http://www.esri.com/software/arcgis/arcsde/ 16 http://docs.codehaus.org/display/geos/home 17 http://www.deegree.org/ 18 http://www.autodesk.com/mapguide 19 http://topol.cz/?doc=2400 geinformatics fce ctu 2006 111 http://www.esri.com/software/arcgis/arcsde/ http://www.esri.com/software/arcgis/arcsde/ http://docs.codehaus.org/display/geos/home http://www.deegree.org/ http://www.autodesk.com/mapguide http://topol.cz/%3fdoc%3d2400 http://topol.cz/%3fdoc%3d2400 http://www.opengeospatial.org temperature effects on the bridge structure during the all-day monitoring ondřej michal, rudolf urban department of special geodesy, faculty of civil engineering czech technical university in prague thákurova 7, 166 29 prague 6, czech republic ondrej.michal@fsv.cvut.cz, rudolf.urban@fsv.cvut.cz abstract in current time the large amount of pre-stressed bridge structures is used. their horizontal and vertical displacements are well predicted, but for verification of theoretical results is necessary to measure real displacements of these structures depending on external conditions. given by the complexity of the design and by the inhomogenity of external influences (especially temperature of the atmosphere, insolation, wind speed, etc.) cannot yet be reliably determined the changes of the construction caused by the immediate state of the environment and to distinguish them from irreversible (permanent) deformation of the structure. in this paper the deflection line of the bridge of general chábera over river labe during the all-day monitoring will be analysed. there is dense coverage of stabilized points enabling accurate approximation of the displacement of the bridge structure. the paper is focused especially on temperature effects on the bridge structure. the temperature changes cause the deformation of the construction not immediately, but with the time shift between change of temperature and structure deformation. although the points are stabilized on both sides of the bridge deck, for the analysis of results were used only the points on the left side of the main span, where the biggest vertical displacements was detected. 
for testing of dependence of the time shift of the structure deformations and structure temperature the pearson coefficient of correlation was used. keywords: deflection line, pearson correlation, time shift. 1. introduction measurement of vertical displacement of building structures was usually solved by precise leveling, which has sufficient accuracy for significant determine of deformation according the standard čsn 73 04 05. this standard is concerned with long-term displacements and the methodic respond that. nowadays modern total station has almost the same accuracy and the measurement is faster than leveling. on the other hand, spatial polar method is more sensitive on outer conditions,especially atmospheric refraction. therefore is still used less than leveling [6]. spatial polar method is used in measurement of vertical and longitudinal deformation of bridges. in [1] was used trigonometric method for the determination of longitudinal deformations with very high (sub-millimeter) accuracy. the determination of the vertical deformation geoinformatics fce ctu 14(1), 2015, doi:10.14311/gi.14.1.6 79 http://orcid.org/0000-0002-3338-3779 http://dx.doi.org/10.14311/gi.14.1.6 http://creativecommons.org/licenses/by/4.0/ o. michal and r. urban: temperature effects on the bridge structure from the changes of zenith angle was used in [10], where the influence of refraction was investigated. the results from [11] and [9] demonstrate that spatial polar method has sufficient accuracy for determining of vertical displacements of bridges respecting correct procedures suppressing the atmospheric refraction. the measured points can be difficult to access during the measurement of deflection. therefore new methods with passive reflection are used. displacements with high frequency can be measured by ground-based (gb) radar interferometry. this new technology can be effectively used for measuring of steel bridges with high accuracy, because there is no need to stabilize and signalized the measured points [2]. gb radar interferometry can be used for concrete structures also. in this case the observed points have to be signaled by the corner reflectors. accuracy of measurement stays still very high [8]. deformations of complex structures can be measured by laser scanning with advantage. this method provides large amount of data, after the statistical processing can be obtained the displacements of parts of structure. unfortunately, the accuracy is still worse than in case of the other methods [3]. the aim of this paper is investigating of the vertical displacements in relatively short time. the measurement has to be fast, so the spatial polar method was used. the structure was monitored for 24 hours in 24 epochs. temperature of the air and the structure was observed during measurement. linear dependence was expected between temperature changes and displacements with unknown time shift between temperature and structure change. the time of the experiment was chosen carefully, the weather conditions were the same several days before and after the monitoring. temperature changed continuously during day and night period and the same periodicity was expected in displacements. 1.1. general chabera bridge the bridge crosses the river elbe near the town litoměřice in czech republic. the bridge (figure 1 and figure 2) is the part of the road ii/247 – connection of the industry area prosmyky to d8 highway. the superstructure is designed as a continuous beam with box girder cross-section. 
Total length of the structure is 584.5 m. It is divided into 7 spans with lengths 43 + 64 + 72 + 90 + 151 + 102 + 60 m [4].

Figure 1: Longitudinal section of superstructure
Figure 2: Cross section of superstructure

2. Material and methods

The Trimble S6 robotic instrument (standard deviation of the horizontal direction and zenith angle measurement σφ = σζ = 0.3 mgon, standard deviation of the distance measurement σd = 1 mm + 1 ppm × d) with a corresponding omnidirectional reflection prism was used for the measurement (Figure 3). The S6 is a robotic total station with an automatic targeting system. The range pole has a special flat heel, which ensures the same height of the target in all epochs.

Figure 3: Trimble S6 robotic total station

The bridge structure was measured by the spatial polar method with automatic targeting on the omnidirectional prism. For the time and temperature analysis of the deflection line it was necessary to determine its shape repeatedly during the day. Sixteen points on the left side of the main span were monitored over a period of 24 hours. The points are stabilized by metal levelling nails in the bridge structure. The spacing between them is ten metres, which is sufficient for the deflection line approximation.

The atmospheric conditions during the measurement were extreme, and so were the expected vertical changes of the monitored points, which was favourable for the experiment. The difference between the maximum and minimum structure temperature was over 20 °C. The weather conditions are summarized in Figure 4.

Figure 4: Weather conditions during measurement (pavement, roadway and air temperature; absolute pressure)

The deflection line of the main span was measured by the spatial polar method. There was one connecting point on the left bank of the river and two control points at the end of the bridge construction; the situation during the measurement is shown in Figure 5. The predicted displacements were less than 10 millimetres. A standard deviation of the determined vertical point displacement of 1 mm was therefore required, and the maximum horizontal distance to an observed point was consequently reduced to 80 m [7]. The temperature of the atmosphere at various heights above the bridge was measured to verify the influence of refraction. The measurement of all points took forty minutes, so the deflection line was determined in epochs of one hour; the whole experiment lasted 24 hours and therefore 24 epochs were measured. The first epoch was measured at 19:00 on June 6th, 2013, the last epoch at 18:00 the next day [5].

Figure 5: Configuration of measurement (aerial image © Geodis)

The heights of all points were calculated in a local vertical datum (the correction for earth curvature was included) using simple trigonometric functions:

H_i = H_{PB} + s_{PB} \cos z_{PB} - s_i \cos z_i ,   (1)

where H_i is the height of point i, H_PB the height of the connecting point, s_PB the slope distance to the connecting point, s_i the slope distance to point i, z_PB the zenith angle to the connecting point, and z_i the zenith angle to point i.
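Equation (1) is simple enough to transcribe directly; the following minimal Python sketch does only that. The function name, variable names and the example values are ours (not part of the authors' processing), the sign convention is kept exactly as printed in eq. (1), and the earth-curvature correction d²/2R mentioned above is deliberately left out for brevity.

import math

def point_height(h_pb, s_pb, z_pb, s_i, z_i):
    """Height of observed point i, a direct transcription of eq. (1).
    Heights and slope distances in metres, zenith angles in radians.
    The earth-curvature correction d^2/(2R) is not applied here."""
    return h_pb + s_pb * math.cos(z_pb) - s_i * math.cos(z_i)

GON = math.pi / 200.0  # 1 gon in radians

# purely illustrative numbers, not measured data
h = point_height(h_pb=150.000, s_pb=75.0, z_pb=101.5 * GON,
                 s_i=60.0, z_i=99.2 * GON)
print(f"height of point i: {h:.4f} m")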
3. Results

The vertical displacements reached values up to 10 millimetres, with the maxima at the points in the middle of the main span. Presenting all values from all epochs would be unclear, so only the epoch with the biggest displacements is given here to illustrate their magnitude (Table 1). The relation between the displacements and the temperature is demonstrated in the next section.

Table 1: The epoch with the biggest displacements (epoch 14), σd = 1.1 mm

point number        13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28
displacement [mm]  2.4  2.3  3.3  4.5  5.2  6.2  6.7  7.3  7.0  6.6  5.8  4.9  4.0  2.9  2.1  2.3

3.1 Relation between displacement and temperature at the same time

First, Pearson correlation coefficients were calculated between the vertical displacements and the air and structure temperature taken at the same time. The calculation was applied only to the eight points in the middle of the main span, where the displacements were the biggest (all over 6 mm). The result, however, did not meet the expectations: the correlation coefficients of these eight points are close to zero (Table 2), so they were not calculated for the remaining points. The very low values of the correlation coefficients were caused by the unknown time shift between the temperature changes and the structure changes. The time shift is visible in Figure 6; the extremes of both quantities occur at different times, the maximum of the displacement being circa 5 hours after the minimum of the temperature. This time shift is investigated below.

Table 2: Correlation coefficients between displacements and temperature

point                                    24    23    22    21    20    19    18    17
correlation with air temperature        0.10  0.08  0.05  0.05  0.02  0.04  0.01  0.03
correlation with structure temperature  0.10  0.08  0.05  0.05  0.02  0.04  0.01  0.03

Figure 6: Time shift between displacements and temperature changes
Figure 7: Time shift between displacements and a) air temperature changes, b) structure temperature changes

3.2 Time shift between temperature and vertical displacement

The correlation function (built from correlation coefficients) was used for the determination of the time shift between the temperature changes and the displacements of the structure. The displacements were measured every hour, while the temperature was monitored almost continuously. The correlation coefficient was calculated for time shifts from 0 to 24 hours with a step of 0.5 hour, which gives the course of the correlation function as a function of the time shift.
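A compact way to picture this computation is the sketch below. It is not the authors' processing code, only a hedged illustration with numpy; the alignment of half-hourly temperature samples with the hourly displacement epochs, and the wrap-around that relies on the one-day periodicity assumed later in the paper, are our own choices.

import numpy as np

def correlation_vs_shift(displacement, temperature, step_h=0.5, max_shift_h=24.0):
    """Pearson correlation between hourly displacements and a temperature series
    sampled every `step_h` hours, evaluated for time shifts 0..max_shift_h."""
    shifts = np.arange(0.0, max_shift_h + step_h, step_h)
    per_hour = int(round(1.0 / step_h))          # temperature samples per hour
    n = len(displacement)
    result = []
    for s in shifts:
        lag = int(round(s / step_h))
        # temperature `s` hours before each displacement epoch,
        # wrapped around the 24 h cycle (periodicity assumption)
        idx = np.mod(np.arange(n) * per_hour - lag, len(temperature))
        r = np.corrcoef(displacement, temperature[idx])[0, 1]
        result.append((s, r))
    return result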
for both air temperature and structure temperature were correlation functions calculated. graphs of functions were presented in figure 7. these graphs are point symmetric and seem to be periodical, which corresponds to the assumptions. the same functions are presented in figure 8 in 3d graph. here we can see that extremes of correlation function are greater in the middle of the main span, where displacements were greater, too. geoinformatics fce ctu 14(1), 2015 84 o. michal and r. urban: temperature effects on the bridge structure 18 23 28 0510 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 13 25 20 15 p oin t n um be r time shift [hh] c or re la ti on co effi ci en ts correlation -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 figure 8: time shift between displacements and temperature changes in 3d figure assuming, that correlation function is a period function with the period one day long we get two extremes of correlation function. if we compute exact time of both extremes, we get time shift and its supplement to one period. that gives us 2 values of time shift for each point. values of correlation function in its extremes were compared with critical value of pearson correlation coefficient (2). rk = ( fk n− 2 −fk )1 2 (2) rk . . . . . . . . . . . . . . .critical value of correlation coefficient. n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .degrees of freedom. fk . . . . . . . . . . . . . . . . . . . . . . critical value of f distribution. for n = 24 (24 epochs of measurement) is rk = 0, 413. if the value of correlation function was lower than its critical value, time shift from this point was not used for next computations. correlation coefficient was significant at all points except point 28, which is situated above the support and its displacement was very small. in table 3 are values of extremes of correlation function and appropriate time shift for relation with temperature of air and structure both. there are both extremes of correlation functions and appropriate time shift. the time shift from minimum of correlation function fluctuate around five hours and the time shift from maximum of correlation function around 19 hours, which is supplement to 24 hours. the time shift from all points is very consistent. its values fluctuate between 5 and 6 hours. only on point 13 the time shift is 8 and a half hour. but point 13 is situated identically like point 28 above the support and maximum of correlation function only slightly outperforms the critical value. geoinformatics fce ctu 14(1), 2015 85 o. michal and r. urban: temperature effects on the bridge structure table 3: determining of time shift between temperature and structure deflections point number 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 average shift [h] a ir r 0.30 0.55 0.63 0.77 0.84 0.88 0.87 0.88 0.87 0.87 0.84 0.77 0.71 0.63 0.40 0.40 5.4 r -0.33 -0.57 -0.64 -0.79 -0.86 -0.90 -0.89 -0.91 -0.89 -0.89 -0.87 -0.80 -0.74 -0.66 -0.44 -0.42 5.6 shift [h] 18.5 18.5 18.0 18.0 18.0 17.0 17.0 17.0 17.0 17.0 17.0 18.0 18.0 5.5shift [h] 6.0 6.0 5.0 5.5 5.5 5.5 5.5 5.0 5.5 5.0 5.5 5.5 6.0 6.0 8.5 st ru ct ur e r 0.33 0.59 0.66 0.81 0.86 0.90 0.89 0.90 0.89 0.89 0.86 0.81 0.76 0.69 0.47 0.46 4.9 r -0.29 -0.51 -0.59 -0.76 -0.82 -0.86 -0.86 -0.87 -0.86 -0.85 -0.84 -0.77 -0.70 -0.61 -0.42 -0.37 5.0 shift [h] 19.5 19.0 19.0 19.0 19.0 19.0 19.0 19.0 19.0 19.0 19.0 19.0 19.5 19.5 20.5 5.5shift [h] 6.0 6.0 5.0 5.5 5.5 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.5 5.0 resultant time shift was calculated as a weighted average. 
the values of extremes of correlation functions were used as weight. the time shift between displacements of structure and their cause is five hours. 4. conclusion the direct correlation was found between vertical displacements and changes of both the air and structure temperature using pearson correlation coefficient. it was found that structure reacts to temperature changes with quite a delay. this time shift was calculated using course of the correlation function. its values significantly outperform the critical value and are close to 1, which indicates functional dependency. this time shift depends on the thermal transmittance of the structure and temperature difference and is not generally valid. it is necessary to consider the effect of this delay during precise measurement of bridges with large span like load tests. it may cause major inaccuracies and completely invalidate otherwise properly conducted measurement. acknowledgement the article was written with support of the internal grants of czech technical university in prague sgs15 “optimization of acquisition and processing of 3d data for purpose of engineering surveying“ references [1] jaroslav braun and martin štroner. “geodetic measurement of longitudinal displacements of the railway bridge”. in: geoinformatics fce ctu 12 (june 2014), pp. 16–21. doi: 10.14311/gi.12.3. [2] ján erdélyi et al. “monitoring of a concrete roof using terrestrial laser scanning”. in: geoinformatics fce ctu 13 (dec. 2014), pp. 25–30. doi: 10.14311/gi.13.3. [3] ján erdélyi et al. “určovanie posunov a pretvorení železobetónových konštrukcií pomocou tls”. in: geodézia, kartografia a geografické informačné systémy. košice: technical university, berg faculty, 2012. isbn: 978-80-553-1173-9. geoinformatics fce ctu 14(1), 2015 86 http://dx.doi.org/10.14311/gi.12.3 http://dx.doi.org/10.14311/gi.13.3 o. michal and r. urban: temperature effects on the bridge structure [4] václav kvasnička. “bridge over elbe river in litoměřice”. in: structural concrete in the czech republic 2006-2009: national report of the czech republic : 3rd fib congress washington. prague: czech concrete society, c2010, pp. 34–37. isbn: 978-80-903806-0-8. url: http://www.metrostav.cz/pdf/reference/cbs_nz2010_06_print.pdf. [5] ondřej michal. “analysis of deflection of the bridge construction”. master thesis. czech technical university in prague, 2015. url: http://geo.fsv.cvut.cz/proj/dp/2015/ ondrej-michal-dp-2015.pdf. [6] ondřej michal and rudolf urban. “analýza technologie pro určování průhybové čáry mostních konstrukcí”. in: grant journal 2.2 (2013). issn: 1805-062x. url: http : / / www.grantjournal.com/issue/0201/pdf/0201urban.pdf. [7] martin štroner and miroslav hampacher. zpracování a analýza měření v inženýrské geodézii. praha: ctu publishing house, 2011, p. 313. isbn: 978-80-01-04900-6. [8] milan talich. “přesné monitorování svislých průhybů mostních konstrukcí metodou pozemní radarové interferometrie”. in: xii. mezinárodní konference geodézie a kartografie v dopravě, olomouc 4.-5.9. 2014. český svaz geodetů a kartografů, 2014, pp. 75– 88. isbn: 978-80-02-02553-5. [9] rudolf urban and martin štroner. “measurement of deflection line on bridges”. in: reports on geodesy and geoinformatics 95.1 (jan. 2013). doi: 10.2478/rgg20130013. [10] rudolf urban, martin štroner, and václav jurga. “development of bridge deflections in the 24-hours cycle”. in: ingeo 2014. vol. 1. české vysoké učení technické v praze, 2014, pp. 155–160. isbn: 978-80-01-05469-7. 
[11] lukáš vráblík, martin štroner, and rudolf urban. “measurement of bridge body across the river labe in melnik”. in: acta montanistica slovaca (2009), pp. 79–85. issn: 13351788. geoinformatics fce ctu 14(1), 2015 87 http://www.metrostav.cz/pdf/reference/cbs_nz2010_06_print.pdf http://geo.fsv.cvut.cz/proj/dp/2015/ondrej-michal-dp-2015.pdf http://geo.fsv.cvut.cz/proj/dp/2015/ondrej-michal-dp-2015.pdf http://www.grantjournal.com/issue/0201/pdf/0201urban.pdf http://www.grantjournal.com/issue/0201/pdf/0201urban.pdf http://dx.doi.org/10.2478/rgg-2013-0013 http://dx.doi.org/10.2478/rgg-2013-0013 integrating drools and r software for intelligent map system jan ruzicka institute of geoinformatics vsb – tu of ostrava jan.ruzicka vsb.cz keywords: expert system, map sheet evaluation, drools, r software, ontology abstract the paper describes intelligent map system that allows to check errors in map sheets or to help with a map sheet creation. the system is based on expert system drools, ontology created in protége and statistical software r. prototype of the system should evaluate that this kind of integration is possible, so the system is not full of rules. the prototype is filled with twenty rules written in drl language and with more than thirty items from the ontology. the paper should show how all of these components can be integrated together to allow such kind of a map sheet evaluation. the system is now used for selection of the best method for data classification. the selection is suggested by drools system that uses r software to perform statistical tests of normality and uniformity. introduction the world of cartography is changing, we can see it in any map that is available on the web these days. any internet user can create own map without any basic knowledge about cartography. there are tools for a map creation available free of charge and geodata available free of charge as well. when the tool for a map creation keeps the process of a map creation under its supervision, the resulting map is usually correct in a term of cartography rules. when the tool gives a lot of options how to create the map, the map is usually full of mistakes. this is described in [1]: "process of making map is core of the whole cartography, but not only specialists are making maps nowadays. in last years, this process not involved only the cartographers, but also the common users. production of map with using adequate software is a simple process now, which is used by non-cartographic users. these users do not know basic cartographic rules for making maps and they make maps intuitively. this situation needs the implementation of principles of cartography directly into the map production systems in pursuit of correct and effective maps producing. instead of the final map is also important the explanation and the proposing of several possible solutions. according to progress in the artificial intelligence, the knowledge-base systems can be applied for this problem. these systems can partly substitute a role of the expert in this process." we have decided to research possibilities how to create intelligent map system, that can help in a process of a map creation. several tools has been inspected and tested for purposes of the system development. we have discovered that such system can not be simply created with one tool, but that several independent systems should be integrated together. this article describes integration of expert system and statistical system. 
geoinformatics fce ctu 2011 85 ruzicka j.: integrating drools and r software for intelligent map system aim of the system aim of the system is to help with a map creation for users that are not familiar with cartographic rules. the system can help in two ways: • answer a question in a process of a map sheet creation, • check a created map sheet for mistakes. when the user creates a map there are always steps where he/she must do a decision. for example which size of a font to use for a title of the map or which classification method to use for creating classes breaks. the user just simply (or sometimes not so simply) answers to system questions and obtains recommendations how to finish the step of the map creation. the answers can be filled in a simple graphical user environment with items such as text field or check box. several answers can be derived from the data used by user for the map creation. a similar approach has been used in the descartes project [2] and we just adopted it to our project. an another way, not yet researched in deep in any founded article, is based on check existing (created) map for mistakes. in this approach the system obtains the map from the user and analyses its content. when it is needed, the system can ask the user for original data. the map is checked according to cartographic rules. this approach is mentioned in [3], but the system mentioned in the paper was not tested and not even developed. a result of the check of the map for mistakes can be of three types: • a list of mistakes and suggestions how to avoid them, • a map without mistakes based on the original map, • a map without mistakes based on the original data and original map. the simplest way is to provide the user with a list of mistakes and some suggestions how to avoid them. we can generally declare that system described in this article works according to this simplest way. the more difficult is to repair the map. to be able to repair the map there must be meet several conditions: • the map must be available in the form of file, that uses structures that can help with simple map repair (e.g. scalable vector graphics format), • the mistakes must be from the selected types, not all mistakes can be automatically repaired, • the original data must be available. the aim of the system is not to create it so flexible that it is able to find any mistake in a map, but it should be able to find several most horrible mistakes to help with a map quality improvement. for example to avoid creation of maps such as on the following figure (figure 1). pilot project focus the pilot project is focused only on selected part of cartography techniques namely choropleth maps and cartograms. it has been tested on atlas of fire protection in the czech republic geoinformatics fce ctu 2011 86 ruzicka j.: integrating drools and r software for intelligent map system figure 1: a map with several mistakes (ministry of interior). the atlas allows to create a choropleth map or a cartogram based on statistical database of events that required fire brigade action. the atlas allows to specify following conditions: • year from/to of events, • type of events (e.g. fire where were injured fireman), • statistical method for generating class intervals (jenks, equal interval, etc.), • number of classes, • type of frequency (square km, population), • start colour, end colour for classes visualization. the user must specify these conditions. 
selection of the statistical method is in the pilot project now based on the intelligent map system. the resulting map can be as on the following figure (figure 2). system architecture the system is based on integration of several items listed on the following figure (figure 3). the process of answering to the question which classification method to use is covered by following steps: • client (any soap/rest capable – in our pilot project the client is the atlas) sends data for classification to service. geoinformatics fce ctu 2011 87 ruzicka j.: integrating drools and r software for intelligent map system figure 2: choropleth map from the atlas figure 3: system architecture • the service reads an ontology (available in owl format) and creates objects that will be placed in a session of an expert system based on drools. • when is created an instance of a class named statisticalvaluesgeo, the data from the client are stored into the instance. • after the data are stored in the instance the instance creates r software instance and runs tests of the data in the r software instance. • the service creates the session of the expert system and fires all rules on the session. geoinformatics fce ctu 2011 88 ruzicka j.: integrating drools and r software for intelligent map system • results of the all rules run is stored in the infocontainer class. • the service reads results from the infocontainer class and returns response containing the results to the client. ontology the used ontology is created in protége software. the ontology is created with regard to limits of export to java classes. the export is done via protége-owl-api that has several limits when exporting ontology. so the ontology is just a simple hierarchy with super-classes and sub-classes. the classes have defined attributes with a data type definition and a cardinality relationship between class and attribute. class statisticalvaluesgeo the class statisticalvaluesgeo extends class statisticalvalues, that is defined in the ontology. the extension is based on reaction to the process when data used for a map are stored within this class. in that moment is tested their statistical distribution. the distribution is tested only for three possible models: • normal distribution. • uniform distribution. • other distribution. the test of distributions is done in software r via tool rjava (jri – http://rosuda.org/rjava/). the tool rjava is a java native interface to r software. normal distribution the normal distribution is tested with shapiro test (module shapiro.test). when the value of resulting w is more than 0.95 and value of resulting pvalue is more than 0.05 then the data are identified as they have normal distribution. see the following code for details. private static boolean testnormality(rengine re) { long e=re.rniparse("shapiro.test(p)", 1); long r=re.rnieval(e, 0); rexp x=new rexp(re, r); rvector rv =x.asvector(); x = rv.at(0); double w = x.asdouble(); x = rv.at(1); double pvalue = x.asdouble(); if (w > 0.95 && pvalue > 0.05) { return true; } else { return false; } } geoinformatics fce ctu 2011 89 http://rosuda.org/rjava/ ruzicka j.: integrating drools and r software for intelligent map system uniform distribution the uniform distribution is tested with kolmogorov – smirnov test (module ks.test). 
for the purposes of the test is created uniform distribution based on minimum and maximum from the tested data distribution when the value of resulting d is less than 0.1 and value of resulting pvalue is more than 0.05 then the data are identified as they have uniform distribution. see the following code for details. private static boolean testuniformity(rengine re) { rexp x = re.eval("y=c(min(p):max(p))"); long e=re.rniparse("ks.test(y, p)", 1); long r=re.rnieval(e, 0); x=new rexp(re, r); rvector rv =x.asvector(); x = rv.at(0); double d = x.asdouble(); x = rv.at(1); double pvalue = x.asdouble(); if (d < 0.1 && pvalue > 0.05) { return true; } else { return false; } } drools the drools system is used for testing defined cartographic rules. the rules are written in drl language. for example of decision which classification method to use are used following three rules. rule "normaldistribution" when n_statisticalvaluesgeo ( distribution == "normal" ) then infocontainer.method = "equalinterval"; end rule "uniformdistribution" when n_statisticalvaluesgeo ( distribution == "uniform" ) then infocontainer.method = "quantile"; end rule "otherdistribution" when n_statisticalvaluesgeo ( distribution == "other" ) or \ n_statisticalvaluesgeo ( distribution == "unknown" ) then infocontainer.method = "natural"; end there are also rules to test when is used another classification method that is recommended by the system according to detected distribution. rule "normaldistributionclassificationscheme" when not equalintervalscheme() and n_statisticalvaluesgeo ( distribution == "normal" ) then geoinformatics fce ctu 2011 90 ruzicka j.: integrating drools and r software for intelligent map system infocontainer.addmessage("[10] when is distribution of the data normal, the equal interval \ classification scheme should be used"); end rule "uniformdistributionclassificationscheme" when not quantilescheme() and n_statisticalvaluesgeo ( distribution == "uniform" ) then infocontainer.addmessage("[11] when is distribution of the data uniform, the quantile \ classification scheme should be used"); end rule "otherdistributionclassificationscheme" when not jenksscheme() and n_statisticalvaluesgeo ( distribution == "other" ) then infocontainer.addmessage("[12] when the distribution of the data is not normal or linear, \ the jenks classification scheme should be used"); end problems there were several problems with integration of r software with drools system. integration to jboss when is system run in a single user environment, then there is not any problem. when we decided to place the prototype into jboss application server (that is our primary application server for running drools system in multiple user environment), then the r engine instance does not end in the correct way. there was not still the time to fixed this problem so we use simple workaround. there is used solution based on stateless cgi interface that is called from jboss engine. where to implement call of r after several test it is not still clear where to place code for running r software. more general solution could be based on separated class, that will work as a proxy for classes that would like to use capabilities of r software. this solution should be used only when the problem with integration to jboss will be fixed. at a moment is all code written in the class statisticalvaluesgeo. server with older version of r the server dedicated for our test purposes uses old version of r software and it is difficult to move to newer version. 
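Pulling the pieces together, the decision chain described above (Shapiro-Wilk test for normality, Kolmogorov-Smirnov test against a uniform model, then the choice of the classification scheme encoded in the DRL rules) can be illustrated in one short function. This is only a hedged sketch: it uses Python with scipy.stats instead of the Drools/JRI/R stack of the prototype, it tests against a continuous uniform model directly rather than generating an integer sequence as the R snippet does, and the thresholds are the ones quoted in the text.

import numpy as np
from scipy import stats

def classification_method(values):
    """Suggest a class-interval scheme from the data distribution,
    mirroring the rules described above (thresholds taken from the text)."""
    x = np.asarray(values, dtype=float)
    w, p_norm = stats.shapiro(x)
    if w > 0.95 and p_norm > 0.05:
        return "equalInterval"                    # normal distribution
    # compare against a uniform model spanning the observed range
    d, p_unif = stats.kstest(x, "uniform", args=(x.min(), x.max() - x.min()))
    if d < 0.1 and p_unif > 0.05:
        return "quantile"                         # uniform distribution
    return "natural"                              # other distribution -> Jenks

print(classification_method(np.random.default_rng(1).normal(50, 5, 200)))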
so we had to handle two problems: • the older version does not support function assign. we fixed this with simple convert array to vector. • the r engine must be run with –vanilla parameter. this was quite hard to find out, because nobody mentioned this problem on any discussion forum. geoinformatics fce ctu 2011 91 ruzicka j.: integrating drools and r software for intelligent map system conclusion as a part of our research we did integration of drools with r software. our findings are simple, but possibly valuable: • the integration is possible, but at the moment not with a good performance (cgi workaround) • the solution based on integration of drools and r software allows in the future to use another functions from r. • r engine can be replaced with another tool (the solution is not directly dependent on the r engine). references 1. brus j., dobesova z., kanok, j.: utilization of expert systems in thematic cartography. in: badr y., caballe s., xhafa f., abraham a., gros b. (ed.), incos ’09 proceedings of the 2009 international conference on intelligent networking and collaborative systems. pp. 285–289. ieee computer society washington, dc usa. isbn 978-0-7695-3858-7 (2009) 2. andrienko, g., andrienko, n.: knowledge engineering for automated map design in descartes. in: advances in geographic information systems, ed. by medeiros, c.b., 7th international symposium acm gis’99, kansas city, november 1999 (acm press, new york 1999) pp. 66-72 (1999) 3. růžička, j.: pomohou webové služby odstranit noční můru kartografů?. 16. kartografická konference (mapa v informační společnosti)., univerzita obrany, 2005, s. 1-10. isbn 80-72-310-151 (2005) geoinformatics fce ctu 2011 92 measuring repeatability of the focus-variable lenses jan řezníček laboratory of photogrammetry, department of geomatics, czech technical university in prague thákurova 7, 166 29 prague 6, czech republic reznicek33@centrum.cz abstract in the field of photogrammetry, the optical system, usually represented by the glass lens, is used for metric purposes. therefore, the aberration characteristics of such a lens, inducing deviations from projective imaging, has to be well known. however, the most important property of the metric lens is the stability of its glass and mechanical elements, ensuring long-term reliability of the measured parameters. in case of a focus-variable lens, the repeatability of the lens setup is important as well. lenses with a fixed focal length are usually considered as “fixed” though, in fact, most of them contain one or more movable glass elements, providing the focusing function. in cases where the lens is not equipped with fixing screws, the repeatability of the calibration parameters should be known. this paper derives simple mathematical formulas that can be used for measuring the repeatability of the focus-variable lenses, and gives a demonstrative example of such measuring. the given procedure has the advantage that only demanded parameters are estimated, hence, no unwanted correlations with the additional parameters exist. the test arrangement enables us to measure each demanded magnification of the optical system, which is important in close-range photogrammetry. keywords: photogrammetry, camera, calibration, stability, repeatability, focus-variable lens 1. introduction in the field of photogrammetry, the optical system, usually represented by the glass lens, is used for metric purposes. 
therefore, the aberration characteristics of such a lens, inducing deviations from projective imaging, has to be well known. however, the most important property of the metric lens is the stability of its glass and mechanical elements, ensuring long-term reliability of the measured parameters. in the case of a focus-variable lens, the repeatability of the lens setup is important as well. lenses with a fixed focal length are usually considered as “fixed” though, in fact, most of them contain one or more movable glass elements, providing the focusing function. in cases where the lens is not equipped with fixing screws, the repeatability of the calibration parameters should be known. 1.1. related work several papers have been published in recent years addressing the issue of the geometric stability of the camera calibration parameters. shortis et al. (1997) describe the magnitude of instability of the calibration parameters of the kodak dcs420 and 460 cameras when used geoinformatics fce ctu 13, 2014, doi:10.14311/gi.13.1 9 http://orcid.org/0000-0001-6279-1926 http://dx.doi.org/10.14311/gi.13.1 řezníček. j: measuring repeatability of the focus-variable lenses as a metric system. läbe and förstner (2004) investigated the use of consumer grade cameras for photogrammetric applications in terms of their stability in time and usable functions, such as the zoom and auto focus. shortis et al. (2006) compares the stability of the zoom vs. fixed focal lenses where the zoom lenses have been fixed with a piece of tape. rieke-zapp et al. (2009) evaluated the geometric stability and the accuracy potential of several fixed focal lenses, where some of them were fixed even in the focus mechanism. sanz-ablanedo et al. (2010) performed a comparison of the geometric stability of the interior orientation parameters (iop) of six identical compact digital cameras. the measurement was performed in three model situations: during continuous use of the cameras, after the camera was powered off/on and after the full extension and retraction of the zoom-lens. all of the above mentioned works performed the measurement by computing the iop, using the method of multi-pose network analytical calibration. the influence of different magnifications of each target on the calibration parameters is not covered and neither is the detailed correlation analysis between estimated parameters, which would give a more realistic view on the given results. 1.2. motivation we have chosen to study the influence of the focusing mechanism play on the repeatability of the iop of the camera, which is not covered in the previous works. in order to prevent the influence of the possible high correlations among iop end eop (exterior orientation parameters) we have decided to use a different, more rigorous, approach. it is also more suitable for analyzing additional systematic effects, as the method does not require an a priori given mathematical model describing camera’s internal geometry. the high correlations among parameters could cause an unreliable estimate. for example, the small change of the principal distance or the principal point can be reduced by a small change of the camera pose. 1.3. interior orientation parameters interior orientation parameters can be defined as “all characteristics that affect the geometry of the photograph” (slama et al. 1980, p. 244). the most important iop are given by: format (sensor) dimensions, principal distance, principal point position and lens distortion characteristics. 
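the lens distortion characteristics listed above are most commonly parameterized by the standard brown polynomial model; the following form is given only as a common textbook illustration and is not taken from this paper:
\[
\Delta x = x'\,(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2x'^2) + 2 p_2 x' y',\qquad
\Delta y = y'\,(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_2 (r^2 + 2y'^2) + 2 p_1 x' y',
\]
where $x', y'$ are image coordinates reduced to the principal point, $r^2 = x'^2 + y'^2$, $k_i$ are the radial coefficients and $p_i$ the decentring coefficients.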
different types of cameras demand different characteristics. for example, the airborne cameras are usually assembled from much more components than the closerange cameras and therefore do requires much more parameters for the characterization of its interior geometry. thus, depending on the camera type and demanding accuracy, additional characteristics could be needed: fiducials, axis scale, skew, reseau coordinates, point spread function (psf), sensor unflatness characteristics, sensor noise characteristics, forward motion compensation characteristics, etc. in this paper, we focus only on the deviation in the location of the projection center (principal distance and the principal point position). 2. procedure and test arrangement the measured lens is mounted to a digital camera body which has a fixed pose. the camera (we will use term camera for camera body with mounted lens) is directed toward the planar test field in such a way that the lens optical axis is perpendicular to the test field. the test field consists of several hundred (280) black circular dots (targets) printed on a white sheet geoinformatics fce ctu 13, 2014 10 řezníček. j: measuring repeatability of the focus-variable lenses of paper in a regular interval. with such an arrangement, a pair of images is taken. before the second shot, the lens is refocused either manually or by remote control. in the case of the remote control function, the contrast target has to be moving in front of the lens in order to drive the automatic focus function. refocusing moves the optical elements inside the lens assembly, which cause the small deviations in repeatability of the focus setup. however, the position of the camera body should not be changed during image acquisition. finally, the image coordinates of the target images are detected and referenced on both images. figure 1: test arrangement 3. theoretical development in this section, we will describe the geometry of the situation and develop mathematical relations for it. the geometry of two marginal rays, corresponding to acquired pair of images, passing from the target (d) in object space to the sensor in the image space (e,f) is shown in figure 2. each ray is passing through the different projection center (a,b) of the same camera. the difference in the location of the projection center (denoted xb, yb, zb) simulates the error given by the refocusing of the lens. the goal is to express those differences in mathematic relations which could be enumerated by using images acquired by the procedure given above. two coordinate systems are used here – local (denoted with superscript .loc) and global. the origin of the global coordinate system lies in the projection center a. the x-axis is vertical and directed towards the zenith, the z-axis lies in the optical axis of the camera and is directed towards the sensor, finally the y-axis is defined by the right-hand rule. the definition of the local coordinate system is same with one exception: the x and y component of the origin is shifted by values xr, yr, where r is a lower-right corner of the sensor (in the negative orientation). this definition allows us to read the image coordinates directly in the local coordinate system, because the pixels are organized in a matrix system (origin in the upper left corner) and because the orientation of the recorded image is positive (however, the figure 2 shows the real negative orientation of the sensor). 
figure 2: the geometry of two marginal rays
notation aspects: the homogeneous (also called projective) system of coordinates will be used instead of the cartesian system of coordinates. image point coordinates can then be defined in the same global coordinate system as the object point coordinates, which is unusual but very practical. the $z$ coordinate of every image point is therefore equal to the principal distance $c_A$. the projection of the object point $D$ to the image points $E$ and $F$ can be written as
\[
E^{loc} = T_A \cdot D \qquad (1)
\]
\[
F^{loc} = T_B \cdot D \qquad (2)
\]
where
\[
D = \begin{pmatrix} x_D \\ y_D \\ z_D \\ 1 \end{pmatrix},\qquad
E^{loc} = z_D \begin{pmatrix} x_E^{loc} \\ y_E^{loc} \\ c_A \\ 1 \end{pmatrix},\qquad
F^{loc} = (z_D - z_B) \begin{pmatrix} x_F^{loc} \\ y_F^{loc} \\ c_A \\ 1 \end{pmatrix} \qquad (3)
\]
and
\[
T_A = M_2 M_1 = \begin{pmatrix}
c_A & 0 & 0 & z_D\, x_K^{loc} \\
0 & c_A & 0 & z_D\, y_K^{loc} \\
0 & 0 & c_A & 0 \\
0 & 0 & 0 & z_D
\end{pmatrix}.
\]
$M_1$ represents the projection and $M_2$ represents the transformation from the global to the local system
\[
M_1 = \begin{pmatrix} c_A & 0 & 0 & 0 \\ 0 & c_A & 0 & 0 \\ 0 & 0 & c_A & 0 \\ 0 & 0 & 0 & z_D \end{pmatrix},\qquad
M_2 = \begin{pmatrix} 1 & 0 & 0 & x_K^{loc} \\ 0 & 1 & 0 & y_K^{loc} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
therefore
\[
x_E^{loc} = \frac{c_A}{z_D}\, x_D + x_K^{loc},\qquad
y_E^{loc} = \frac{c_A}{z_D}\, y_D + y_K^{loc}.
\]
$T_B$ consists of three matrices
\[
T_B = N_3 N_2 N_1 = \begin{pmatrix}
c_B & 0 & 0 & x_L^{loc}(z_D - z_B) - c_B x_B \\
0 & c_B & 0 & y_L^{loc}(z_D - z_B) - c_B y_B \\
0 & 0 & c_A & -c_A z_B \\
0 & 0 & 0 & z_D - z_B
\end{pmatrix}
\]
where $N_1$ represents the translation of the point $D$ before projection, $N_2$ represents the projection and $N_3$ represents the transformation from the global to the local system
\[
N_1 = \begin{pmatrix} 1 & 0 & 0 & -x_B \\ 0 & 1 & 0 & -y_B \\ 0 & 0 & 1 & -z_B \\ 0 & 0 & 0 & 1 \end{pmatrix},\quad
N_2 = \begin{pmatrix} c_B & 0 & 0 & 0 \\ 0 & c_B & 0 & 0 \\ 0 & 0 & c_A & 0 \\ 0 & 0 & 0 & z_D - z_B \end{pmatrix},\quad
N_3 = \begin{pmatrix} 1 & 0 & 0 & x_L^{loc} \\ 0 & 1 & 0 & y_L^{loc} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
by combining equations (1) and (2), we get
\[
E^{loc} = T_A \cdot D = T_A \cdot T_B^{-1} \cdot F^{loc} = T \cdot F^{loc} \qquad (4)
\]
where
\[
T = T_A T_B^{-1} = \begin{pmatrix}
\dfrac{c_A}{c_B} & 0 & 0 & \dfrac{z_D\, x_K^{loc}}{z_D - z_B} - \dfrac{c_A}{c_B}\,\dfrac{x_L^{loc}(z_D - z_B) - c_B x_B}{z_D - z_B} \\
0 & \dfrac{c_A}{c_B} & 0 & \dfrac{z_D\, y_K^{loc}}{z_D - z_B} - \dfrac{c_A}{c_B}\,\dfrac{y_L^{loc}(z_D - z_B) - c_B y_B}{z_D - z_B} \\
0 & 0 & 1 & \dfrac{c_A z_B}{z_D - z_B} \\
0 & 0 & 0 & \dfrac{z_D}{z_D - z_B}
\end{pmatrix}. \qquad (5)
\]
therefore, according to (3), (4) and (5),
\[
z_D \begin{pmatrix} x_E^{loc} \\ y_E^{loc} \\ c_A \\ 1 \end{pmatrix}
= T \cdot (z_D - z_B) \begin{pmatrix} x_F^{loc} \\ y_F^{loc} \\ c_A \\ 1 \end{pmatrix}
\]
which leads to
\[
\begin{pmatrix} x_E^{loc} \\ y_E^{loc} \\ c_A \\ 1 \end{pmatrix}
= \begin{pmatrix}
\dfrac{c_A}{c_B}\dfrac{z_D - z_B}{z_D} & 0 & 0 & x_K^{loc} - \dfrac{c_A}{c_B}\dfrac{x_L^{loc}(z_D - z_B) - c_B x_B}{z_D} \\
0 & \dfrac{c_A}{c_B}\dfrac{z_D - z_B}{z_D} & 0 & y_K^{loc} - \dfrac{c_A}{c_B}\dfrac{y_L^{loc}(z_D - z_B) - c_B y_B}{z_D} \\
0 & 0 & \dfrac{z_D - z_B}{z_D} & \dfrac{c_A z_B}{z_D} \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x_F^{loc} \\ y_F^{loc} \\ c_A \\ 1 \end{pmatrix}.
\]
for the third component $c_A$ we can write
\[
c_A = c_A\,\frac{z_D - z_B}{z_D} + c_A\,\frac{z_B}{z_D},
\]
which after simplification reduces to the identity $z_B/z_D = z_B/z_D$. therefore, only the first and second components constitute a system of independent equations which can be solved. those two equations are given by
\[
x_E^{loc} = \lambda\, x_F^{loc} + t_x,\qquad
y_E^{loc} = \lambda\, y_F^{loc} + t_y \qquad (6)
\]
where
\[
\lambda = \frac{c_A (z_D - z_B)}{z_D (c_A - z_B)},\qquad
t_x = x_K^{loc} - \frac{c_A \left( x_K^{loc} + x_B \right)(z_D - z_B)}{z_D (c_A - z_B)} + \frac{c_A x_B}{z_D},\qquad
t_y = y_K^{loc} - \frac{c_A \left( y_K^{loc} + y_B \right)(z_D - z_B)}{z_D (c_A - z_B)} + \frac{c_A y_B}{z_D}
\]
and
\[
c_B = c_A - z_B,\qquad x_L^{loc} = x_K^{loc} + x_B,\qquad y_L^{loc} = y_K^{loc} + y_B.
\]
the system of equations given in (6) contains four measurements: $x_E^{loc}, y_E^{loc}, x_F^{loc}, y_F^{loc}$, representing the image coordinates of the target in images a and b; three unknown parameters: $x_B, y_B, z_B$, representing the deviation of the projection center; and four known constants: $c_A, x_K^{loc}, y_K^{loc}, z_D$.
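the two relations in (6) can be evaluated as a forward model inside any iterative estimator; a minimal java sketch (the method name and argument order are hypothetical, the symbols follow the notation above):

// predicted image-a coordinates of one target, given the projection-centre
// deviation (xB, yB, zB), the constants cA, zD, xK, yK and the measured
// image-b coordinates (xF, yF); implements equations (6)
static double[] predictImageA(double xB, double yB, double zB,
                              double cA, double zD, double xK, double yK,
                              double xF, double yF) {
    double lambda = cA * (zD - zB) / (zD * (cA - zB));
    double tx = xK - cA * (xK + xB) * (zD - zB) / (zD * (cA - zB)) + cA * xB / zD;
    double ty = yK - cA * (yK + yB) * (zD - zB) / (zD * (cA - zB)) + cA * yB / zD;
    return new double[] { lambda * xF + tx, lambda * yF + ty };
}

the residuals between these predictions and the measured image-a coordinates, taken over all targets, are what the adjustment minimizes.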
because the test field consists of several hundred targets, the system of non-linear equations is over-determined and needs to be solved by a proper estimator. one pair of images (a and b) gives one sample estimate of the projection center deviation. at least several tens of pairs have to be taken in order to get a reliable estimate and standard deviation of the projection center deviation,
\[
\bar{x}_B = \frac{\sum_{i=1}^{n} \left| x_B^i \right|}{n},\qquad
s_{x_B} = \sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n-1}},\quad \text{where } v_i = \left| x_B^i \right| - \bar{x}_B,
\]
and where n is the number of image pairs (the same applies to $y_B$ and $z_B$).
4. demonstrative example
the theoretical development given in the previous section is used in the following test. we have measured the repeatability of the focus mechanism of the canon ef 40 mm f/2.8 stm lens, mounted on a canon 5d mark ii digital camera body (35 mm full-frame sensor). the whole procedure is described above. the test field consisted of 280 black circular dots (8 mm in diameter) and was placed at a distance of 1230 mm from the camera sensor. before each shot, the lens was refocused using a remote control function. the focusing mode was set to the one-shot af mode. the target images were detected using an ellipse-fitting type of detector, using only the green channel in order to suppress the error given by chromatic aberration. the over-determined system of non-linear equations given in (6) was solved by an estimator presented in mikhail et al. (1976) and called "adjustment with conditions only – general case". the program was written by the author of this paper in the matlab language.
4.1 initial stability test
the first set of images (101 images, i.e. 100 pairs) was taken with no refocusing or any other manipulation, in order to confirm the stability of the test field and camera and to confirm the randomness of all processes (e.g. image pre-processing, target detector). the results are given in table 1 and figure 3.

parameter   nominal value [mm]   estimated value [mm]   standard deviation [µm]
x̄_b         40.0                 39.999996              s_x_b = 0.097
ȳ_b         12.0128              12.01280               s_y_b = 0.11
z̄_b         18.0288              18.028820              s_z_b = 0.067
table 1: results based on n = 100 estimated values

conclusion: as can be seen from table 1, there is no significant change in the position of the projection center, which proves that no systematic error is present. the randomness is also visible in figure 3.
4.2 focus repeatability test
for the main focus repeatability test, 101 images were taken in total (this makes 100 pairs). the results of this test are given in table 2 and figure 4.

parameter   nominal value [mm]   estimated value [mm]   standard deviation [µm]
x̄_b         40.0                 40.000                 s_x_b = 26.0
ȳ_b         12.0128              12.0128                s_y_b = 3.7
z̄_b         18.0288              18.0288                s_z_b = 2.4
table 2: results based on n = 100 estimated values

conclusion: the results given in table 2 show a significant deviation of the projection center position induced by the refocusing operation. figure 4, displaying the vector residuals (given by solving the system of equations) of two consecutive images, 94 and 95, shows no systematic pattern (the same applies to all other image pairs). this means that, for this particular case, no change of the lens distortion was measured (within the bounds of the measurement accuracy).
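the repeatability statistics defined above can be computed in a few lines of code; a minimal java sketch, assuming the per-pair estimates of one component (e.g. z_b) are already collected in an array:

// sample mean of the absolute per-pair estimates and its standard deviation,
// following the formulas given in section 3
static double[] repeatabilityStats(double[] perPairEstimates) {
    int n = perPairEstimates.length;
    double mean = 0.0;
    for (double v : perPairEstimates) {
        mean += Math.abs(v);
    }
    mean /= n;
    double sumSq = 0.0;
    for (double v : perPairEstimates) {
        double residual = Math.abs(v) - mean;
        sumSq += residual * residual;
    }
    double stdDev = Math.sqrt(sumSq / (n - 1));
    return new double[] { mean, stdDev };
}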
figure 3: the vector differences between the image coordinates of corresponding targets of two consecutive images, 94 and 95 (this pair was chosen randomly)
figure 4: the residual vectors between the image coordinates of corresponding targets of two consecutive images, 94 and 95 (this pair was chosen randomly)
5. conclusion
we have derived simple mathematical formulas which can be used for measuring the repeatability of focus-variable lenses. the procedure has the advantage that only the demanded parameters are estimated, hence no unwanted correlations with additional parameters exist. the test arrangement enables measuring at any demanded magnification of the optical system, which is important in close-range photogrammetry. the demonstrative example showed an error in repeatability which can be modeled with a simple linear model; however, a more complicated, non-linear error progression can be modeled as well, without the need for an a priori known model. because the measured lens was not calibrated at the time of the test, we used nominal values of the principal distance and principal point position instead of calibrated ones. the differences in the resulting values are so small that, for the purpose of this test, they can be neglected. the results (standard deviations of the parameters) given by measuring the repeatability have to be considered when planning photogrammetric project accuracy, as the corresponding estimates from a single camera calibration are not realistic (in the long-term sense).
acknowledgement
this work was supported by the czech ministry of culture, under grant naki, no. df13p01ovv002 (new modern non-invasive methods of cultural heritage objects exploration).
references
[1] läbe t, förstner w. geometric stability of low-cost digital consumer cameras. in: proceedings of the xx isprs congress, commission i: 2004 jul 12–23; istanbul, turkey. iaprs 2004;35(pt b1):528–34.
[2] mikhail em, ackermann f. observations and least squares. new york: iep–a dun-donnelley; 1976.
[3] rieke-zapp d, tecklenburg w, peipe j, hastedt h, haig c. evaluation of the geometric stability and the accuracy potential of digital cameras – comparing mechanical stabilisation versus parameterisation. isprs journal of photogrammetry and remote sensing 2009;64:248–58.
[4] sanz-ablanedo e, rodríguez-pérez jr, armesto j, taboada mfa. geometric stability and lens decentering in compact digital cameras. sensors 2010;10:1553–72. doi:10.3390/s100301553.
[5] shortis mr, beyer ha. calibration stability of the kodak dcs420 and 460 cameras. in: proceedings of spie 3174, videometrics v, 94 (july 7, 1997).
[6] shortis mr, bellman cj, robson s, johnston gj, johnson gw. stability of zoom and fixed lenses used with digital slr cameras. in: proceedings of the isprs commission v symposium "image engineering and vision metrology": 2006 sep 25–27; dresden, germany. iaprs 2006;36(pt 5):285–90.
[7] slama chc, theurer c, henriksen sw, editors. manual of photogrammetry. falls church, virginia, usa: american society of photogrammetry; 1980.

experiences in photogrammetric and laser scanner surveying of architectural heritage
mauro caprioli1, maurizio minchilli2, alfredo scognamiglio1
1 department of "vie e trasporti", polytechnic of bari, v. orabona 4, 70125 bari, italy, m.caprioli@poliba.it
2 department of architecture, design and urbanism, university of sassari, minchilli@uniss.it
keywords: photogrammetry, laser scanning
abstract: the integration of different methodologies and the simplification of procedures are surely among the most important current themes in survey techniques for researching cultural heritage, especially architectural heritage. the traditional methods for the metric survey of an architectural building (including micro geodesy and the stereophotogrammetric survey) are currently being consolidated. however, in the field of the architectural, artistic and archaeological survey, new technologies are now available and this knowledge needs to be more widely used, especially where the heritage is greater and so maintenance is as necessary as it is onerous. this report proposes some operative considerations, derived from experience developed using the most modern technologies available, in order to compare and discuss the achievable accuracies (even if they are not always strictly necessary) and the operating modalities, especially for architectural surveys. the aim of this report is thus to verify that the simplification of procedures and cheaper solutions do not bring variations to the necessary metric precision or expert photo-interpretation analysis. however, it is clear that, without considering the purposes and objectives of the survey, it would not be correct to appraise the survey methods and to classify them using reliability and accuracy criteria. in fact, the objective of a survey determines which tool will be used, according to the requests of the survey client to the survey expert.
1. digital photogrammetry
since the 1980s, terrestrial photogrammetry has changed significantly in terms of methodology, performance and operation. these years saw first the passage from analogue restitution tools (which were very widely used in italy at that time) to analytical tools, then the passage to hybrid workflows (analogue acquisition, digital restitution) and finally to entirely digital technologies. new technologies and techniques were gradually developed for data acquisition (photogrammetric scanners and metric digital cameras), for the elaboration of numerical data (homologous image correlation), for representation (2d and 3d cad, hybrid raster/vector file treatment, simulation, animation and virtual reality) and for the treatment of information and spatial analyses (georeferenced information systems in the architectural and archaeological fields). the improvement in survey methods for monuments and historical places has provided an important contribution to describing and monitoring cultural heritage, to the maintenance and restoration of monuments, objects and architectural places, as well as helping historical, architectural and archaeological research.
however, photogrammetry is still used to obtain geometric information such as position, measurements and the shape of objects captured by photographic images, without measuring and analyzing them directly. currently, the optical system (for coordinate restitution of a point in three-dimensional space by the minimal intersection between two optic rays that connect a point on the object with the center of the camera objective and with the image point) guarantees great freedom of action, just working directly on digital images. furthermore, the presence of more than 3 projected rays of this kind (if objects appear on three or more photos) makes it possible to seek a general solution for the identification of survey point coordinates (bundle solution) by using all available measurements together (photographic and other). when the object is on a plane it is very common to use single-frame shots because of the quick elaboration for high-resolution and large dimension images. in addition, digital photo-adaptation techniques, expressed as geometric transformations of images and projections on spatial models are frequently used. this is the case when the surface of an object in space (digital surface model dsm) is already known, so it is important to know details of the representation (weaving, models, lesions, material analysis etc.). knowledge of the camera orientation parameters greatly helps the geometric construction of a reliable model and, in recent years, laboratory autosetting-calibration methodologies have been made available to the final user. in any case, external-orientation reconstruction requires 3d mailto:minchilli@uniss.it ________________________________________________________________________________ geoinformatics ctu fce 2011 56 point measurements, obtained by traditional techniques. when the measurements of an object or its geometry are not required, at least two photographic images of it can be useful in order to exploit the stereoscopic-vision characteristics given by two images of the same object, observed simultaneously in an independent way with two eyes. the classical-photogrammetrical restitution products are vectorial drawings of an object and, according to classical architectural drawing conventions, this describes plans, prospectuses and sections, but also among others three dimensional wireframe models, surfaces models (dsm), coordinated points lists. photogrammetrical restitution instruments can be equipped with attributes in order to obtain more rapidly updated information about the object, for example material, state of damage or any other characteristic that is useful in order to amplify the knowledge of the architectural object. building-volume informative systems need three-dimensional gis software. these developing procedures should allow topological and spatial analyses underlining mutual interactions between objects in threedimensional space. 1.1 “bundle restitution” in many cases using stereo-images alone is not enough for the reconstruction of objects with complex forms such as building volume, therefore a larger number of photos are used to totally reconstruct such objects. in these cases, the simultaneous resolution of the orientation of all the photos is recommended to obtain homogeneous solutions (bundle block adjustment). the terrestrial applications and development of the star-triangulation-projection software began in the 1980s with bingo industrial applications. this kind of software is commercially available and has simple interfaces. 
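the bundle solution referred to above rests on the collinearity condition; as a standard textbook illustration (not reproduced from this paper), the equations minimized for every measured image point are
\[
x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)},\qquad
y = y_0 - c\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)},
\]
where $(X_0, Y_0, Z_0)$ is the projection centre, $(x_0, y_0, c)$ the interior orientation and $r_{ij}$ the elements of the rotation matrix; the bundle block adjustment estimates these quantities simultaneously for all images, together with the object coordinates $(X, Y, Z)$.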
effecting a photographic camera calibration for each single survey (on-the-job) is another advantage because this method helps to increase the survey accuracy and is used with images taken using non-metric photographic cameras. in the case of multiple captures and in the case of surveys with multiple support-points, the photogrammetrical survey technique is more accessible and flexible because it uses uncalibrated photographic cameras. the photographs can be realized with different capture-angles, without using parallel sights (normal case) and from pre-set distances, as used in classic photogrammetry. the possibility to use horizontally and vertically convergent oblique photos, and using different photographic cameras and different lenses, allows a large choice of photographic shooting points. the photographic acquisition method has to satisfy the necessity that each point is intersected by at least two optical rays with a satisfactory angle, according to the desired metric accuracy. in this method it is possible to use additional knowledge such as parallelism, surface flatness and angle orthogonality in order to reconstruct a more correct three-dimensional geometric model of the object. all the parameter measurements and calculations are realized using the ordinary least squares statistical compensation method and, thanks to system redundancy, it is possible to locate coarse errors and to improve the accuracy and reliability of results. this method gives very appreciable and accurate results, useful in cad software in which it is possible to manipulate the obtained geometric data. as already explained, the results obtained by this procedure are usually 3d wireframe models or digital surfaces models of the object (dsm) or they can also be entities and points that are classified by assigned their attributes. the graphic products are 3d digital models and they can also be completed with the surface “texture”, obtained directly from the photographs. 1.2 image acquisition systems before the availability of totally digital processes, using expensive and very specialized equipment (such as photographic metric cameras) limited the applications of architectural photogrammetry, because it was necessary to know the inside orientation parameters of the cameras. these instruments were expensive and so not widely used because it was necessary to use large format cameras (in order to read the images in a better way) and lenses were built and calibrated to have very small radial distortion. the important development and commercial availability of digital image acquisition systems has allowed lower prices and a widened market. the main advantage of these systems is the possibility to directly acquire digital images, that can then be processed in a numerical programming environment, without passing through film development, the possible photograph or acquisition by a photogrammetrical scanner (because they all introduced big or small geometric survey alterations). the digital process already allows greater productivity, related to the intrinsic automation possibilities of electronic computation and it also allows the contemporary use of vector-data and "raster" data in a single working session. because digital aerial photogrammetric cameras are cheaper, the global digitization process has been quite fast in architectural photogrammetry: sometimes specialized digital metric cameras are produced and high resolution semi-professional cameras are used more. 
in summary, the advantages are:
• a unique data flow with the possibility of on-line processing
• considerable automation possibilities
• good geometric characteristics
• independence from the film development process
• quality control of the acquired images
• inexpensiveness of the system components
which kind of photographic camera is better to use is not an easy question to answer, because different factors can favour one choice rather than another. often a photogrammetrical survey can be complex (sometimes the subject of the survey itself is complex), the available budget can vary, and the time available for operations can be restricted because of external factors such as emergencies caused by catastrophic events. these factors influence the choice of the system to use and, adding further variables such as the need for colour, the scale of representation and the necessary resolution, it is clear that the previous question has many possible answers. in conclusion, it is important to know all the factors and survey "objectives" in order to choose the best acquisition system. some important considerations, derived from classical methodological experience, are still as valid with current technologies:
• using large format sensors, according to economic availability;
• using fixed focal lenses for better resolution and better distortion containment;
• using lenses with manual focus setting, pre-set on calibration;
• choosing a metal support structure in order to have better mechanical stability, rather than relying on auto-calibration orientations;
• using a camera stand and a low-sensitivity sensor (less noise in the digital image).
1.3 systems and methods for architectural photogrammetry
most commercial digital photogrammetry workstations (dpw) are set out primarily for stereoscopic image restitution, aerial triangulation, extraction of digital terrain models (dtm) and production of ortho-photographs from aerial or terrestrial stereo images. in fact, cheaper systems are sufficient to reconstruct a three-dimensional model of an architectural object, but they do not assure metric precision and accuracy, for which the acquisition and elaboration tools have to be of high quality. these systems can be used by photogrammetry experts and also by other professional figures who are interested in documenting, maintaining and restoring architectural heritage. software improves day by day, and three-dimensional object models are becoming easier and more accurate to realize. examples of current software and systems are introduced here. to compare different systems, it is important to focus on the following aspects: ease of use of the system, data flow, project management, data import and export (image formats, internal and external orientation parameters, etc.), internal orientation, external orientation (in one or two steps), object reconstruction, consistent, accurate and reliable topological results, and the photogrammetrical knowledge necessary to manage the system.
1.4 monoscopic systems of restitution and multiple images
monoscopic restitution systems are designed to obtain metric restitution of objects from multiple images acquired by different methods and from different shooting angles and positions. to obtain metric documentation, these systems can also take into account camera internal orientation parameters and lens or scanner distortions.
these systems are used with multiple images in order to obtain the significant point coordinates of an investigated object by using the images available. from these points it is possible to obtain the dimensions that characterise the object, the metric documentation and a three-dimensional representation, ortho-photos, photo-plans and three-dimensional photo-realistic models are typical products. the photomodeler software is an example of these systems (an application is described below). it works with the following methodology: camera calibration with a specific internal module, acquisition of two or more photographic images from different angles, homologous point identification on different images, 3d model geometry tracking (points, lines, curves, etc.), data processing for 3d model reconstruction, vectorial or photo-realistic 3d model visualization, re-projected on a plane (ortho-photo), extraction of coordinates and shapes, export of various formats for animated rendering or otherwise. figure 1: photomodeler by eos sys. ________________________________________________________________________________ geoinformatics ctu fce 2011 58 1.5 stereoscopic restitution system stereoscopic photogrammetrical restitution systems have been in use for long time. however the transition from analogical to digital procedures allowed the automation and the simplification of procedures and this is good news. the transition from analogical to digital photogrammetry permits a reduction in costs and difficulties. final users ask for three-dimensional models of the architectural objects with some metric and semantic information. there is continuous development of easier and cheaper solutions as well as automatic procedures. stereoscopic restitution systems are based on: stereo pairs acquisition of objects by cameras (note: camera interior-orientation parameters must be known and must be obtained by appropriate calibration); digital stereo pairs introduction in the stereo vision system by a special polarization monitor, in order to obtain stereoscopic vision; external orientation resolution in two phases (relative and then absolute orientation); three-dimensional geometry restitution by the graphic entities plan, through the overlap of pictures and, eventually, through feature discrimination using layers and codes. the most interesting features are connected with automated processes and final realizable products. the extraction of point coordinates on an object surface is possible using automatic image correlation and provides a digital model (dsm), usually in a grid format, with a variable level of detail, according to the representation needs and computation time that increases with increasing grid resolution. if the surface information on the survey object is already available, in this kind of software it is possible to re-project the digital images in order to obtain high accuracy ortho-photos or photo-plans in an easy way. 2. survey with 3d laser scanner the spread of laser scanning is relatively recent and is certainly one of the technologies which will show expansion in the near future. the major expectations in the cultural heritage field concern terrestrial equipment. there are several kinds of instruments with different accuracy and load capacity (from mm to km). a measurement session with a laser scanner gives a set of three-dimensional coordinates, usually in a reference system associated with the instrument. these coordinates represent the very high number of points hit by the laser. 
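each raw scanner measurement is essentially a range with two angles; a minimal java sketch of the polar-to-cartesian conversion that produces one point of such a cloud (hypothetical method, instrument-centred right-handed frame assumed):

// converts one laser observation (slope distance, horizontal direction and
// zenith angle, angles in radians) into instrument-centred xyz coordinates
static double[] polarToCartesian(double slopeDistance, double horizontalAngle, double zenithAngle) {
    double horizontal = slopeDistance * Math.sin(zenithAngle);  // projection onto the horizontal plane
    double x = horizontal * Math.cos(horizontalAngle);
    double y = horizontal * Math.sin(horizontalAngle);
    double z = slopeDistance * Math.cos(zenithAngle);
    return new double[] { x, y, z };
}

every returned range and angle pair maps to one such coordinate triple in the instrument's own frame.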
therefore the point cloud so generated describes external surface of the scanned object. laser systems are almost completely automated and they are able to acquire a remarkable number of points per second, sometimes thousands. a laser scan product consists of one or more “point clouds " with high density, thousands or hundreds of thousands of points, which describe object surface and its surrounding environment in great detail (many tools can scan 360 degrees horizontally with high vertical aperture). however, this data is not a stand-alone product like classic graphical representations derived from direct or photogrammetrical surveys, in which the picture is filtered by the operator who selects and codes interesting elements. so software is the key to successful laser scanning but it is often characterized by many unknowns. in fact there are currently some good products but many lack software, especially concerning the architectural heritage field. there are about a dozen main packages currently used, including an italian product. these are typically stand-alone software products but some are integrated into cad software as add-on packages. these products integrate some tasks that are not normally included in cad or three-dimensional data (dtm) management software. some objects such as sculptures could require a full-3d wire-frame system (with its computational power requirements) rather than a regular planimetric approach (known as a 2.5d approach used for terrain modeling). the stage of "mesh" generation is one of the most delicate and results provided by different programs on the same object can be quite different. furthermore, in the cultural heritage application field, the techniques of “photorealistic texture mapping" have great importance. texturing on the 3d model surface, reconstructed from metric laser scanning, is also helpful in the study and interpretation of an object, because we must remember that in many cases the photographic image contains very valuable information about conservation status such as aspect, colors, water traces, etc. the use of high-accuracy orthophotos (true orthophoto), generated from many different pictures, is another interesting development. this allows both full photo coverage of an object and knowledge of shadows or hidden parts, with the possibility of deleting any elements disturbing the final product. we can make some observations about the use of laser scanning in common architectural heritage survey contexts: scanner choice depends on the object size, acquisition distance and required precision; technology provides high speed automatic acquisition, and does not need a related topographic survey if using a local reference system. full object description requires more acquisitions from different stations, to identify any gaps caused by shape (especially for all around objects, e.g. sculptures) and therefore a survey should be carefully planned; the use of a high-reflectance target (flat or spherical) is very common, allowing unification of several scans; logistical problems (transportation, instrument weight, power supply, needs to work at night or in low light with triangulation instruments or phase measurement) can be very stringent in certain circumstances. 
finally, the cost of tools and software packages is still high, and it is hard for a user to buy two or three different types of tool in order to use them effectively in every possible survey in the cultural heritage field.
3. experiences
over the years, some surveys and several experimental tests have been conducted in different localities of puglia and basilicata. the first concerns the church of st. catherine, situated in conversano (bari), which includes many features useful in our studies. the church has a simple four-sided plan and is interesting mainly for the presence of large curved surfaces, which are more difficult to detect; it has a quite limited size; and it is a free-standing building, which allows a complete three-dimensional survey. the measurements were made with three laser systems and the results are reported below.
3.1 surveys
the riegl lms z210 laser scanner captures data with a 360° horizontal angle, an 80° vertical angle, a minimum scan grid of 2.5 mm and a minimum angle of 80 mgon. the cyrax 2500 laser scanner captures data with a 40° horizontal and vertical angle but, unlike the previous one, with a minimum scan grid of 4 mm; in this case the tool works with the time-of-flight method. the gs 100 3d laser scanner, made by the french company mensi, is a tool scanning up to 100 meters with a 360° horizontal angle (the same as the riegl) and a 60° vertical angle.
figure 2: point clouds with different instruments
these points, detected in abundant quantities, efficiently and completely describe the surface of our church; however, the abundance of data generates a certain difficulty in identifying the significant points of the detected architecture, the ones we are accustomed to recognizing in classical architectural representations realized with vectors that conventionally describe objects such as edges, slits, overhangs, etc. further experience and 3d laser scanning instruments allowed the development and refinement of the most effective methodologies for using this survey technique. the following pictures were shot with a leica hds laser scanner in the unesco world heritage site of the "sassi" in matera.
figure 3: "convicino s. antonio", matera
the next image was obtained using topcon's new generation total station. its motorized system allows the acquisition of a large number of coordinates. this tool, less expensive than laser scanners, can in some cases usefully replace 3d laser scanning and obtain an already reduced point cloud integrated with the traditional topographic survey.
figure 4: stone bridge "san giorgio", bari; topcon is (imaging station) and 3d model
the next figure shows the results of the survey of the cave church of st. vigilia in a gis environment and the subsequent extraction of sections, elevations and floor plans.
figure 5: point cloud of the s. vigilia cave church, fasano (ba), in a gis environment and section extraction
5. conclusions
experience since 1980 in the field of architectural photogrammetry leads us to make some comments about the use of photogrammetrical systems or laser scanners. comprehension and execution of the photogrammetrical approach was, and still is, quite difficult for a new user.
photographic equipment for image acquisition requires a geometrical study and radiometric calibration which are hard to automate. the previous considerations cannot be overlooked, although current methods allow many more placement patterns and refined automation of camera exposure. laser scanning processes, with their high-speed data acquisition, have made the terrestrial photogrammetrical technique appear, falsely, obsolete and "unfashionable". the great demand for laser acquisition of historical buildings, monuments and objects for documentary purposes underlines the previous observations. but, as usually happens, increasing demand is not followed by training the purchaser in the requirements and goals of a correct survey. despite the great potential of these tools, they are often only used to produce plans, elevations and sections (plotted to scale!). we believe that photogrammetry may in the near future still give a significant contribution, especially to architects (and archaeologists), who as specialists require the ability to interpret the object of their interest and not just measure it.

specific aspects of geodetic network processing using the sonet program (špecifické aspekty spracovania geodetických sietí použitím programu sonet)
marián kováč, ján hefty
department of theoretical geodesy, faculty of civil engineering, slovak university of technology
e-mail: marian.kovac@stuba.sk, jan.hefty@stuba.sk
keywords: geodetic networks, adjustment
abstract
the article describes a program for the processing of geodetic networks with an analytically defined mathematical model (observation equations defined in the program in symbolic form), which is the most distinctive feature differentiating the program from current geodetic network processing software. the universality of the program is demonstrated on the example of processing a multi-epoch geodetic network. the processing was carried out in the form of case studies, ranging from an elementary combination of gps measurements with their covariance matrices taken into account, up to the joint processing of terrestrial observations and gps with temporal changes and transformation parameters considered in a common mathematical model.
introduction
one of the basic tasks of geodesy is the building of geodetic networks. geodetic networks form a set of geodetic points purposefully distributed on the earth's surface. they form the basis for studying the shape, dimensions and gravity field of the earth, and they are also the foundation for all kinds of engineering and surveying work. the significance and role of geodetic networks change and evolve with the development of geodesy. the classical approach to building and processing geodetic networks focuses on the separate processing of horizontal, height and gravity measurements and is referred to as two-dimensional geodesy. with the development of satellite methods and their availability and accuracy, the problem arises in geodesy of how to make the best use of these measurements without losing the information about the three-dimensional position of points. likewise, the problem arises of how best to combine these measurements with terrestrial and gravimetric measurements. the need thus arises to unify the available measurements in a common mathematical model.
the term four-dimensional geodesy is used for those geodetic theories, processing methods and interpretations that deal with determining the spatial position of points together with a description of their changes in time.
the software application
motivation. the following findings led us to create software with an analytically defined mathematical core:
• in geodesy, various types of geodetic networks (terrestrial, gravimetric, gps, ...) are currently processed in such a way that a different program is, as a rule, needed for each type of network or for a certain group of them.
• the mathematical model for the adjustment of geodetic networks (assuming an adjustment of indirect observations) is in principle based on knowledge of the relation between the measured quantities and the unknowns, which are linked by a functional relation called the observation equation.
the presented program is designed as a modular system in which the basic application can be extended with (a) plug-in modules and (b) scripts (in the python language). the input data format of the program is xml [4]. this input file comprises (a) a part in which the mathematical model of the network is described and (b) a part containing the observations themselves.
geodetic observations
the program can process the following geodetic observations (the corresponding xml element is given in parentheses): geocentric cartesian coordinates (coordinate), or their parametric vector, changes of spatial position (velocity), horizontal angle (angle), zenith angle (z-angle), spatial distance (distance) and height difference (diffh). in the input file, a horizontal angle (from station a) is written, for example, in the form:
the program makes it possible to process multi-epoch geodetic networks; the individual epochs are referred to in the program as units. each unit encapsulates observations grouped in a part called a block. besides the observations, each block also contains the corresponding covariance matrix. an example of a unit with one block, one distance and the corresponding covariance matrix:
mathematical model
the mathematical model is defined in the program analytically. when composing it, one has to specify (a) the observations (observations) to be processed, (b) the unknown estimated parameters (unknowns) in symbolic form (as text strings) and (c) the observation equations (equations) in symbolic form, which link the observations with the defined unknown parameters.
selection of the unknown parameters
by means of the unknown parameters it is possible to define in the program the unknowns that we want to obtain from the adjustment. the unknown parameters are defined in the unknown element. each unknown element contains a group element, which groups point elements defining the points to which the unknown is related. if the given group (the group element) has the name attribute defined, it is called a named group; if it does not, it is called an anonymous group. the meaning of an anonymous group is that a separate unknown is created for each point defined in the group; an example of an anonymous group is, for instance, the definition of the estimated coordinates. conversely, for a named group a single unknown is created, tied to all the points contained in the named group; an example of a named group is, for instance, the definition of transformation parameters that are related to several points.
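whatever the concrete set of unknowns, such a symbolic definition ultimately feeds a standard least-squares adjustment; purely as a general illustration (textbook gauss-markov form, not a description of the sonet internals), the linearized observation equations and their solution read
\[
\mathbf{l} + \mathbf{v} = \mathbf{A}\,\hat{\mathbf{x}},\qquad
\hat{\mathbf{x}} = \left(\mathbf{A}^{\mathsf T}\mathbf{P}\,\mathbf{A}\right)^{-1}\mathbf{A}^{\mathsf T}\mathbf{P}\,\mathbf{l},\qquad
\mathbf{P} = \sigma_0^2\,\boldsymbol{\Sigma}_l^{-1},
\]
where $\mathbf{A}$ is the design matrix obtained by differentiating the observation equations with respect to the unknowns, $\mathbf{l}$ the vector of reduced observations, $\mathbf{v}$ the residuals and $\boldsymbol{\Sigma}_l$ the covariance matrix of the observations.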
an example of a named group:
observation equations
the definition of observation equations with unknowns and observations forms the basic philosophy of the application. in the general theory of estimation, the observation equations provide the link between the observations, which are the subject of measurement, and the unknown parameters to be determined, which are the subject (goal) of the estimation. in general, the mathematical (deterministic) model is formed precisely by the observation equations, which can be freely defined and modified in the application. in the sonet program the observation equations are written in symbolic form. arbitrary mathematical operators and standard mathematical functions can be used in the observation equations. an example of a simple levelling observation equation is given in the following listing:
h{i,j} = h{j} - h{i};
where the braces contain the indices of the corresponding observation and of the unknown estimated parameters.
besides the observations and the estimated unknowns, further variables can be used in the observation equations, namely metainformation and ellipsoid parameters loaded from external files.
derivatives of the observation equations. the sonet program automatically expands the composed observation equations into a taylor series, i.e. it differentiates these observation equations automatically; the linearization of the observation equations is thus handled analytically.
metainformation
metainformation makes it possible to bring certain numerical values into the processing so that they can be used in symbolic form in the observation equations. the numerical values representing metainformation can be, for example, time stamps, values of temperature and pressure, heights of theodolites and targets, etc. the individual items of metainformation are contained in the meta element. the following listing illustrates the use of these attributes in the meta element:
example: the network of the mochovce nuclear power plant
in 1988 and 1989, repeated measurements of the mochovce local geodetic network were carried out using terrestrial geodetic methods (measurements of distances and horizontal angles). a schematic representation of the measured quantities is shown in figure 1. in 2001, 2002 and 2003, geodetic measurements of a selected part of the mochovce network were carried out by the gps method (figure 2) [6,7].
figure 1: terrestrial observations; the symbol ◦ marks points at which angle measurements were carried out, a line symbol marks distance measurements between points of the geodetic network.
figure 2: points measured by gps; the square symbol marks points measured in at least two campaigns in the period 2001–2003.
mathematical model
the geodetic network in question was processed in several variants. presented here is the joint processing of the terrestrial measurements and gps with the estimation of coordinates, velocities and transformation parameters, realized by the model [5] (adapted):
\[
\begin{pmatrix}
\mathbf{x}^{(t_1)} \\ \mathbf{x}^{(t_2)} \\ \vdots \\ \mathbf{x}^{(t_n)} \\ \mathbf{l}^{(t_1)} \\ \mathbf{l}^{(t_2)} \\ \vdots \\ \mathbf{l}^{(t_p)}
\end{pmatrix}
=
\begin{pmatrix}
\mathbf{I}^{(t_1)} & \mathbf{D}^{(t_1)} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{I}^{(t_2)} & \mathbf{D}^{(t_2)} & \mathbf{T}^{(t_2)} & \cdots & \mathbf{0} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\mathbf{I}^{(t_n)} & \mathbf{D}^{(t_n)} & \mathbf{0} & \cdots & \mathbf{T}^{(t_n)} \\
\mathbf{A}_L^{(t_1)} & \mathbf{D}_L^{(t_1)} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{A}_L^{(t_2)} & \mathbf{D}_L^{(t_2)} & \mathbf{0} & \cdots & \mathbf{0} \\
\vdots & \vdots & & & \vdots \\
\mathbf{A}_L^{(t_p)} & \mathbf{D}_L^{(t_p)} & \mathbf{0} & \cdots & \mathbf{0}
\end{pmatrix}
\begin{pmatrix}
\mathbf{y} \\ \mathbf{v}_y \\ \boldsymbol{\theta}^{(t_2)} \\ \vdots \\ \boldsymbol{\theta}^{(t_n)}
\end{pmatrix},
\]
with the covariance matrix
\[
\boldsymbol{\Sigma} =
\begin{pmatrix}
\boldsymbol{\Sigma}^{(t_1)} & \cdots & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
\mathbf{0} & \cdots & \boldsymbol{\Sigma}^{(t_m)} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \cdots & \mathbf{0} & \boldsymbol{\Sigma}_L^{(t_1)} & \cdots & \mathbf{0} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} & \cdots & \boldsymbol{\Sigma}_L^{(t_p)}
\end{pmatrix},
\]
where $\mathbf{I}^{(t_i)}$ is the matrix linking the observations in the i-th epoch with the estimated coordinates, $\mathbf{A}_L^{(t_i)}$ is the matrix linking the terrestrial observations in the i-th epoch with the estimated geocentric cartesian coordinates, $\mathbf{D}^{(t_i)}$ and $\mathbf{D}_L^{(t_i)}$ are diagonal matrices defining the link between the velocities and the observations in the i-th epoch for the gps and terrestrial observations, $\mathbf{T}^{(t_i)}$ is the matrix linking the observations with the estimated transformation parameters, $\mathbf{x}^{(t_i)}$ is the vector of realizations (gps-derived coordinates) in the i-th epoch, $\mathbf{l}^{(t_i)}$ are the terrestrial observations in the i-th epoch, $\boldsymbol{\Sigma}^{(t_i)}$ is the covariance matrix of the coordinates determined from gps in the i-th epoch, $\boldsymbol{\Sigma}_L^{(t_i)}$ is the covariance matrix of the terrestrial observations in the i-th epoch, $\mathbf{y}$ are the resulting coordinates referred to the reference frame of the first epoch, $\mathbf{v}_y$ are the estimated point velocities, and $\boldsymbol{\theta}^{(t_i)}$ are the estimated transformation parameters.
the mathematical model defined in the program input file
the mathematical formulation of the observation equations for geocentric cartesian coordinates:
\[
x_i = x_{0i} + t_x + v_{x_i}(t - t_0),\qquad
y_i = y_{0i} + t_y + v_{y_i}(t - t_0),\qquad
z_i = z_{0i} + t_z + v_{z_i}(t - t_0)
\]
the notation of these equations in the program:
the mathematical formulation of the spatial distance:
\[
s_{ij} = \Big(
\big(x_j + v_{x_j}(t - t_0) - x_i - v_{x_i}(t - t_0)\big)^2 +
\big(y_j + v_{y_j}(t - t_0) - y_i - v_{y_i}(t - t_0)\big)^2 +
\big(z_j + v_{z_j}(t - t_0) - z_i - v_{z_i}(t - t_0)\big)^2
\Big)^{1/2}
\]
the notation of the spatial-distance observation equation in the program:
the mathematical formulation of the horizontal angle as the difference of two directions:
\[
\omega_{ijk} =
\arctan\frac{-\sin l_i\,(\Delta x_{ik} + \Delta v_{x_{ik}}) + \cos l_i\,(\Delta y_{ik} + \Delta v_{y_{ik}})}
{-\sin b_i \cos l_i\,(\Delta x_{ik} + \Delta v_{x_{ik}}) - \sin b_i \sin l_i\,(\Delta y_{ik} + \Delta v_{y_{ik}}) + \cos b_i\,(\Delta z_{ik} + \Delta v_{z_{ik}})}
- \arctan\frac{-\sin l_i\,(\Delta x_{ij} + \Delta v_{x_{ij}}) + \cos l_i\,(\Delta y_{ij} + \Delta v_{y_{ij}})}
{-\sin b_i \cos l_i\,(\Delta x_{ij} + \Delta v_{x_{ij}}) - \sin b_i \sin l_i\,(\Delta y_{ij} + \Delta v_{y_{ij}}) + \cos b_i\,(\Delta z_{ij} + \Delta v_{z_{ij}})}
\]
the notation in the program:
results of the joint processing
the results of the joint solution of the gps campaigns 2001, 2002 and 2003 and of the terrestrial observations in the 1988 and 1989 epochs, with estimation of the velocities of the monitored points, are given in tables 1, 2, 3 and 4.

point   x [m]          σx [mm]   y [m]          σy [mm]   z [m]          σz [mm]
mo17    4036290.7995   4.7       1352165.7295   4.7       4734164.5202   4.7
mo23    4037308.4431   6.1       1344460.8285   5.7       4735535.1812   6.2
mo29    4037479.6022   6.4       1350279.8828   6.4       4733699.0043   6.4
mo24    4036408.9675   4.7       1345437.2847   4.7       4736041.2008   4.7
mo26    4036341.1893   4.7       1348846.0532   4.7       4735080.0009   4.7
mo28    4038676.6499   7.6       1347383.9801   7.6       4733558.2352   7.6
table 1: estimated geocentric cartesian coordinates of the points of the mochovce geodetic network

point   vx [m/year]   σvx [mm/year]   vy [m/year]   σvy [mm/year]   vz [m/year]   σvz [mm/year]
mo14    0.0059        5.9             -0.0051       5.2             -0.0053       5.9
mo17    0.0057        5.6             -0.0193       5.6             -0.0075       5.7
mo23    0.0191        7.6             -0.0017       5.8             -0.0043       8.2
mo29    0.0098        9.1             -0.0141       9.1             -0.0079       9.1
mo24    0.0050        5.4             -0.0157       5.4             -0.0043       5.5
mo26    0.0148        5.6             -0.0162       5.6             0.0010        5.6
mo28    0.0106        10.0            -0.0207       9.9             0.0050        10.0
table 2: estimated velocities at the points of the mochovce geodetic network

point   b [°]       σb ["]   l [°]       σl ["]   h [m]      σh [mm]
mo17    53.589713   0.0002   20.578845   0.0003   213.2505   6.4
mo23    53.609830   0.0002   20.464701   0.0003   252.8440   6.4
mo24    53.617262   0.0002   20.482827   0.0003   267.6758   4.7
mo26    53.603304   0.0003   20.531516   0.0004   226.3663   7.6
mo28    53.580117   0.0002   20.499716   0.0003   258.4968   5.4
mo29    53.582673   0.0002   20.546452   0.0003   218.3034   4.7
table 3: ellipsoidal coordinates (wgs-84 ellipsoid) of the network points at which two or three gps measurements were carried out
point   vn [m/year]   σvn [mm/year]   ve [m/year]   σve [mm/year]   vv [m/year]   σvv [mm/year]
mo17    -0.0045       6.9             -0.0201       5.2             -0.0061       9.1
mo23    -0.0160       9.10            -0.0076       9.1             0.0085        9.1
mo24    -0.0027       5.6             -0.0164       5.6             -0.0034       5.6
mo26    -0.0060       9.9             -0.0201       9.9             0.0067        10.1
mo28    0.0007        4.9             -0.0229       5.8             0.0060        4.7
mo29    -0.0088       5.4             -0.0165       5.3             -0.0027       5.6
table 4: estimated point velocities transformed into components in the horizontal plane and in height
conclusion
in this article we have focused on a general description of a universal software environment oriented towards the modelling, analysis and processing of, above all, geodetic networks. the theoretical foundations of the software lie in mathematics and numerical mathematics, informatics, geodesy and statistics. the application can use not only the mathematical models for solving geodetic networks outlined in this article, but practically arbitrary mathematical models applicable to the adjustment of geodetic networks. the program makes it possible to process observations of repeated measurements (staged, epoch-wise and permanent), as well as combinations of terrestrial measurements with satellite measurements. terrestrial measurements incorporated into the solution of a satellite network find use, for example, where they improve its geometry, its height component and so on. the variability of the program in defining mathematical models allows their rapid modification, which makes it possible to concentrate above all on the modelling of the investigated geodetic network itself. this freedom in the definition of mathematical models enables not only the separate processing and analysis of one-, two-, three- and four-dimensional geodetic networks, but also their modifications as well as their mutual combination using a global covariance matrix. the mathematical model, i.e. the observation equations forming the mathematical model, is implemented in the program in the form of equations written in symbolic form. we have demonstrated the practical functionality of the program on the solution of a multi-epoch heterogeneous geodetic network.
references
1. čepek, a.: the gnu gama project – adjustment of geodetic networks. acta polytechnica, vol. 42, no. 3, 2002.
2. dobeš, j. et al.: presné lokálne geodetické siete. edícia výskumného ústavu geodézie a kartografie v bratislave, bratislava, 1990.
3. gerhátová, ľ.: integrované spracovanie družicových a terestrických meraní. dizertačná práca, bratislava, 2002.
4. harold, e. r., means, w. s.: xml in a nutshell, 2nd edition. o'reilly, 2002.
5. hefty, j.: globálny polohový systém v štvorrozmernej geodézii. bratislava, 2003.
6. hefty, j.: monitorovanie recentných pohybov litosféry v lokalite atómovej elektrárne mochovce pomocou geodetických metód. správa k úlohe v rámci zmluvy o dielo 04085-02, stu bratislava, 2002.
7. hefty, j.: geologické hodnotenie oblasti emo, meranie recentných pohybov v lokalite emo. stu bratislava, 2004.
8. charamza, f.: gso – an algorithm for solving linear least squares problems with possibly rank deficient matrices. referát vúgtk, praha, 1977.
9. klobušiak, m.: wigs – integrované geodetické siete, transformácie, spájanie, porovnanie, výpočet rýchlostí bodov a transformácie s-jtsk do xtrsyy [programový systém wigs 4.2002]. vúgk & maklo, bratislava, 1995-2002.
10. kubáčková, l.: metódy spracovania experimentálnych meraní. veda, 1990.
geinformatics fce ctu 2006 157 ________________________________________________________________________________ geoinformatics ctu fce 2011 40 the issue of documentation of hardly accessible historical monuments by using of photogrammetry and laser scanner techniques karol bartoš1, katarína pukanská1, juraj gajdošík2 and miroslav krajňák1 1 technical university košice, faculty of mining, ecology, process control and geotechnology, institute of geodesy and geographical information systems, košice, slovakia karol.bartos@tuke.sk, katarina.pukanska@tuke.sk, miroslav.krajnak@tuke.sk 2 geospol, s.r.o., urbánkova 64, košice, slovakia keywords: photogrammetry, 3d laser scanning, cultural heritage, slanec castle abstract: this article deals with the issues of measuring hardly accessible historical monuments on the example of the slanec castle, slovakia. in the first phase the convergence case of close-range photogrammetry was applied using the digital camera pentax k10d, and the 3d model of the castle was subsequently created in the photomodeler scanner software. when choosing the right method and technique of acquiring the digital images, special attention was paid to the shape of the ground, the surroundings and the characteristics of the object of interest. processing of the images was made with the highest possible accuracy with respect to the method and apparatus used. as a result of the processing, an exact spatial model was made and exported to different formats; a digital photo-plan with real photo textures and vector drawings was also made. in the next phase the whole object of the castle was measured with the laser scanner leica scanstation c10 and the final point cloud was processed in the best available software. the results obtained by both methods were compared in comparable digital formats with respect to the positional accuracy of the final models. in the final phase it is planned to obtain images appropriate for the convergence case of photogrammetry using a digital camera placed on a carrier on the mikrokopter hexakopter controlled from the ground. then the final comparison and further analysis of all acquired models can be made. 1. introduction every ruin is a natural part of each cultural landscape where creative human activities took place at least for a while. ruins can be seen either as dilapidated buildings to be re-built or perhaps even removed, or as a component which gives an inconvertible character to the landscape, highlights its picturesqueness and materializes its history. their life span is not eternal; it is limited, although we can prolong it by adequate preservation. ruins of historical monuments are mainly represented by the ruins of medieval castles in slovakia; they dominate the cliffs of the carpathian mountains. many of today's ruins were once busy and comely buildings in which lots of happy or tragic stories of their dwellers took place. nowadays we can actually reconstruct these ruins – we have the technical means – but it is appropriate only in some cases. however, we can also preserve and maintain them and thereby vitalize them; not as they once were, but as they are today. we should do it in such a way that we can stay there for a while, meditate and look into ourselves to see their raison d'être, in the beautiful surroundings of these substantial witnesses of time [1].
the objective of this paper is a comparison of laser scanning technology and digital photogrammetry for the documentation of historical monuments which are situated in a hardly accessible terrain with limited options of movement and selection of camera/scanner stations on the example of the slanec castle, slovakia. 2. the slanec castle – history and present the slanec castle is a gothic castle from the 13th century, destroyed and ruined in 1679. one three floor tower and several walls have been preserved; broken arch of a lancet window stands out architecturally. the ground plan of the castle is a simple rectangular shape, trapezoidal narrowed on the northwest side (fig. 1a). there is a four floor high rounded donjon (fig. 1b). beside the tower is the wall of the gothic palace (fig. 1b), with preserved gothic window of the chapel and beautiful gothic arch beams. [2,3] mailto:karol.bartos@tuke.sk mailto:katarina.pukanska@tuke.sk ________________________________________________________________________________ geoinformatics ctu fce 2011 41 figure 1: a) ground plan of the slanec castle and b) details of the donjon and remains of the wall 3. reconnaissance of the locality the whole object of the castle ruins is located in hardly accessible terrain on a conical hill of volcanic origin. it‟s surrounded by steep slopes overgrown with briery bushes with very limited possibilities of movement on the southern and western side. there is relatively dense beech wood obscuring the entire castle ruins on the north and east side. in addition, part of the ruins is buried and overgrown with vegetation in a great measure. all these conditions make it difficult to approach, surveying the castle ruins, handling with instrumentation and they degrade the results measurement. therefore a combination of multiple methods for measuring individual parts was chosen and in the end comparing of particular results with respect to their accuracy. the measuring was divided into three consecutive stages: 1. close-range photogrammetry with convergent images acquired from the ground 2. terrestrial laser scanning 3. aerial close-range photogrammetry with convergent images acquired with digital camera placed on a carrier on the mikrokopter hexakopter controlled from the ground 4. photogrammetric measurement of recent situation 4.1 imaging the imaging was executed for appropriate and accessible parts of the castle, i.e. wall of the gothic palace, using convergent imaging with general orientation of image axis. the measurement of the tower (donjon) using this method was evaluated as inappropriate or even impossible due to its shape and location. for measurement of recent situation were used points signalized by: naturally: with measurement accuracy of 3 pix (corners, edges, intersections), artificially: with manual measurement accuracy of 0,5 – 1 pix. major part of points was signalised naturally due to the dangerous installation of measuring marks straight on the wall. artificially signalised points, used as a ground control points, were represented by 10 retro-reflective targets placed on the wall with the help of a professional rock climber. in order to situate the final model into the national coordinate system jtsk (datum of uniform trigonometric cadastral network), the real world coordinates of these points were defined by electronic tachymeter leica tcr 305. surveying with leica tcr 305 was performed from 3 fixed points, which were defined by the rtk method of the gps technology. 
jtsk national coordinate system was used for the purpose of comparison with results from other stages. 4.2 accuracy analysis total a priori accuracy of the final model can be defined as follows: (1) where: is the external accuracy given by the standard positional error of ground control points (10 retro targets) ________________________________________________________________________________ geoinformatics ctu fce 2011 42 p is the internal accuracy estimated by the formula: (2) where: s is the standard error of coordinates image, ms is the averaged image scale number, q is the configuration factor; for convergent photogrammetry reaches values 0,5 – 0,7; and k is the average number of images taken from one camera position. due to different distances from the camera stations to the wall, which affects the image scale number, two estimations of the total theoretical a priori standard positional error were determined. one for the lower parts and one for the upper parts of the wall: 4.3 technology and software gps measurement of 3 fixed initial points 5001, 5002 and 5003 (fig. 5) was performed with leica gps900 cs. because of the use of rtk surveying method instead of static gps survey, the coordinates of these 3 points were determined by averaging of the values of 5 measurements for relatively better accuracy of final coordinates. horizontal accuracy of the leica gps900 cs is 10 mm + 1 ppm and vertical 20 mm + 1 ppm. positional measurement of 10 retro-reflective targets was realized in the national coordinate system jtsk using electronic tachymeter leica tcr 305. standard angular deviation of the instrument is 5cc according to din 18723 / iso 12857. accuracy of the infrared distance measurement is 2 mm ± 2 ppm. digital slr pentax k10d with sigma 17-70 mm lens (fig. 2) were used to acquire images. all images were acquired with a focal distance of 17 mm. camera calibration for given focal length was performed before by using of square calibration grid. figure 2: pentax k10d with sigma 17-70 mm lens and its technical specification camera calibration as well as evaluation of all images of the castle wall was performed in the photomodeler scanner software. photomodeler scanner solves calculations of justified components of the exterior orientation on the basis of perspective transformation. spatial coordinates of joining and determining points are calculated by spatial intersection based on known parameters of exterior orientation. calculation of the spatial coordinates of projection centers, ground control points and observation points is realized by bundle adjustment of perspective rays in the reference coordinate system. 4.4 visualization of the 3d model before processing all of images were idealized on the basis of calibration parameters in order to remove lens distortion. the ideal central projection, which unlike the real distorted projection of lens doesn‟t deform lines to curves, was achieved by using the idealization function. idealization of images enables to achieve high quality textures without background showing-through all over the object. the final set of 3d points was created by successive orientation of prepared images and then by referencing identical points. it was necessary to assign the scale and rotation in space to this set of points by defining the xyz coordinates of 3 points retro-reflective targets. total of 1639 points on 50 images have been used to create the final model with average number of 189 points per one image and with average intersection angle of 73°. 
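the step of assigning scale and rotation to the photogrammetric point set by means of the xyz coordinates of three retro-reflective targets is, in essence, a 3d similarity (absolute orientation) problem. the sketch below is not the photomodeler algorithm, only an illustration of the principle using the standard svd-based (umeyama) solution; the model and control coordinates are invented for the example, and at least three non-collinear control points are required.

```python
import numpy as np

def absolute_orientation(model_pts, control_pts):
    """Similarity transform (scale, rotation, translation) mapping model -> control.

    Standard SVD-based solution; model_pts and control_pts are (n, 3) arrays of
    corresponding points, n >= 3 and not collinear.
    """
    m = np.asarray(model_pts, float)
    c = np.asarray(control_pts, float)
    m_mean, c_mean = m.mean(axis=0), c.mean(axis=0)
    m0, c0 = m - m_mean, c - c_mean
    u, s, vt = np.linalg.svd(c0.T @ m0)          # cross-covariance of the point sets
    d = np.sign(np.linalg.det(u @ vt))           # guard against a reflection
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    scale = (s[0] + s[1] + s[2] * d) / (m0 ** 2).sum()
    shift = c_mean - scale * rot @ m_mean
    return scale, rot, shift

# Hypothetical model-space coordinates of three retro targets and their surveyed
# counterparts in the national system (all values are illustrative only).
model = [(1.02, 0.11, 2.53), (4.87, 0.09, 2.61), (3.10, 0.15, 7.42)]
survey = [(463121.84, 1238451.20, 306.02), (463125.66, 1238450.71, 306.14),
          (463123.75, 1238450.40, 310.93)]
scale, rot, shift = absolute_orientation(model, survey)
transformed = scale * (np.asarray(model) @ rot.T) + shift
print("scale:", round(scale, 4))
print("check residuals [m]:\n", np.round(np.asarray(survey) - transformed, 4))
```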
subsequently the wireframe model of castle wall was made by joining individual points. ________________________________________________________________________________ geoinformatics ctu fce 2011 43 individual surfaces were assigned to the each area bordered by lines and textures from images were loaded for these surfaces. lastly the final 3d vector model with real photo textures was created after loading and processing of all textures (fig. 3). the final spatial accuracy of points ranged from 0,4 cm in direction of z-axis to 2,97 cm again in the direction of z -axis. the value of average length of the rms vector was 2,13 cm and the total rms reached value of 2,506 pixel. the accuracy of the final model was controlled directly in the photomodeler scanner software by means of confidence ellipses (fig. 4b). the final model was exported into the interchangeable .dxf format for further comparison. figure 3: the 3d vector model with displayed textures and an axonometric view of the wall figure 4: a) the deployment of camera stations, b) confidence ellipses displayed for the gothic window 5. laser scanning 5.1 the scanning of the slanec castle the scanning was carried out for the whole object of the castle, including buried and overgrown parts, in the coordinate system jtsk. following types of data were recorded during laser scanning: range r, horizontal direction , vertical angle , intensity of the reflected laser signal at each point. three scan stations over a known point – determined by the rtk method of the gps technology (as described in chap. 4.3); and three temporary stations (5101, 5102 and 5103) (fig. 5) which were surveyed by the method of oriented distance by using of the laser scanner; were used to scan the whole object. a direct georeferencing was applied to transform the registered point cloud of the whole object into our chosen national coordinate system. i.e. the scanner was set up over a known point (and its height over the point measured), centred, levelled and oriented towards another known target (backsight), like a total station. the position and orientation information as well as the instrument height was entered into the scanner software before the scanning. ________________________________________________________________________________ geoinformatics ctu fce 2011 44 figure 5: general map of the final scan from above the accuracy of direct georeferencing depends on: the accuracy of the scanner centering, leveling and backsighting and measuring the instrument height; the accuracy of the survey control, i.e. the control points on which the scanner (and possibly the backsight target) is placed.[3] station orientation ∆x[mm] ∆y[mm] ∆z[mm] ∆d[mm] 5003 5001 -14 2 -16 -14 5001 5003 17 -2 17 -17 5101 5001 0 0 3 0 5002 5001 -14 -12 -14 -15 5102 5001 -7 2 -3 7 5101 5102 0 1 0 -1 table 1 : registration accuracy of particular scans from 6 scan stations the final point cloud was processed by the use of leica cyclone 7.0 software (fig. 6). ________________________________________________________________________________ geoinformatics ctu fce 2011 45 figure 6: southern and eastern isometric view of the final scan 5.2 technology and software the laser scanning was performed by the laser scanner leica scanstation c10. the orientation on each of the stations was realized through the centred and levelled leica hds 6" circular tilt & turn target. 
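the "direct georeferencing" workflow described above (scanner set up over a known point, centred, levelled, instrument height measured, oriented towards a backsight target) amounts to a rigid-body transformation of every scanned point. the fragment below is not leica's internal algorithm, only an illustration of the principle under simplifying assumptions: the instrument is taken as perfectly levelled, so a single rotation about the vertical axis plus a 3d translation is applied; the station coordinates, instrument height and orientation azimuth are hypothetical values, and the axis conventions of the real s-jtsk system are ignored for clarity.

```python
import numpy as np

def direct_georeference(points_scanner, station_xyz, instrument_height, azimuth_rad):
    """Transform points from a levelled scanner frame into the national frame.

    Simplified model: only a rotation about the vertical axis (given by the
    orientation towards the backsight) and a 3D translation are applied.
    """
    c, s = np.cos(azimuth_rad), np.sin(azimuth_rad)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    origin = np.asarray(station_xyz, float) + np.array([0.0, 0.0, instrument_height])
    return np.asarray(points_scanner, float) @ rot_z.T + origin

# Hypothetical scan station and two scanned points (all values illustrative).
scan_points = np.array([[12.034, 3.521, 1.208],
                        [11.987, 3.498, 4.730]])
national = direct_georeference(scan_points,
                               station_xyz=(463120.50, 1238450.75, 305.12),
                               instrument_height=1.562,
                               azimuth_rad=np.radians(214.5))
print(np.round(national, 3))
```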
figure 7: a) leica scanstation c10 over a scan station, b) its technical specification 5.3 processing of the final point cloud the 3d model of the wall created on the basis of the close range photogrammetry (chap. 4.4) and the point cloud of the whole castle were available after the second stage. point cloud of the castle tower and point cloud of the gothic palace wall were exported separately just for the purpose of this paper. both of clouds were exported from the leica cyclone software into the .pts format and opened through the trimble realworks 6.5 software, respectively leica cyclone 7.0. subsequently the point cloud of the wall was cleared of all unwanted points (branches, bushes, grass). adjusted point cloud of the wall contained 7 656 813 points. due to the large number of points and hence the impossibility of their processing using a desktop computer this cloud was resampled by means of spatial sampling with a distance of 20,00 mm between points. final set of points consisted of 863 183 points. corresponding mesh model was generated from this point cloud. afterwards it was edited and adjusted by removing peaks, refining and smoothing (fig. 8a). for the point cloud of the castle tower was considered to make a spatial model by using of the leica cyclone 7.0 software, to generate a final mesh, try to model all missing parts and finally to make horizontal cross sections. unfortunately, due to the unexpected software problems and problems with importing a point cloud into the software till completion of this ________________________________________________________________________________ geoinformatics ctu fce 2011 46 paper, generating of the final mesh failed. the point cloud was used to model main part of the tower for the purpose of horizontal cross sections only (fig 8b). figure 8: a) the final mesh of the palace wall, b) cross sections of the castle tower 6. results from the first and second stage since both models of the wall, i.e. 3d model created on the basis of the close range photogrammetry and mesh from the laser scanning; were created and embedded into the same coordinate system jtsk on the basis of measurements performed from the same fixed points (5001, 5002 and 5003) determined by the gps technology; they are at the same level of accuracy given by the accuracy of gps determination of fixed point coordinates. thus they can be opened simultaneously as a one project and compared together. for this purpose a cutting plane tool was used to generate a multiple horizontal slices with intervals of 200,00 mm and a thickness of 10,00 mm. the resulting comparison of three cutting planes situated in three different levels level of the terrain, level of the gothic window and the upper level with remnants of the cantilevers can be seen in the fig. 8. the third stage, using the mikrokopter hexakopter, is beyond the scope of this paper mainly due to bad wind conditions during survey task (the castle hill is a relatively strong windy place). this stage is planned to take place as soon as possible to use its result for completing deficient places of the upper parts of models. 7. conclusion photogrammetry together with laser scanning can be regarded as the predominant method of surveying historical objects. 
advantages of photogrammetry compared to classical geodetic methods consist in: contactless measurements; a large saving of field works; high geometric and temporal resolution; constant visual contact with the measured object during the treatment process and possibility of projection photo-real textural information about measured surfaces into the result. it`s important to take into account the specific conditions in the area of measured object while choosing photogrammetric methods, especially in terms of terrain, territory size, options in choices of viewpoints and undesirable objects which could be situated near the object and shade it. lots of historical monuments in the form of medieval castles are situated in hardly accessible terrain. afterwards, the problem of correct measurement and survey of these objects arises, either by use of photogrammetric or geodetic methods. daresay combination of multiple photogrammetric methods using the latest technology may be the right way. nowadays it is not just about the available technologies but especially about acquisition and investment of sufficient resources into the quality survey of historical monuments. although slovakia dispose of the newest technology, but unfortunately compared to the other countries where large attention is devoted to systematic photogrammetric documentation of historical monuments, slovakia still has a lot to catch up. ________________________________________________________________________________ geoinformatics ctu fce 2011 47 figure 9: comparison of horizontal cross sections through both models 8. references [1] the team of authors: protection of ruins in a cultural country, lietava, association for rescue of the lietava castle, 2006. [2] petro, f.: attractions in history of the slanec and surroundings, obecný úrad slanec [3] reshetyuk, y.: terrestrial laser scanning, leipzig, vdm verlag dr. muller aktiengesellschaft & co. kg, 2009. [4] fraštia, m.: possibilities of using inexpensive digital cameras in applications of cloes-range photogrammetry, slovak journal of civil engineering, 2005/2, 20-28. ]5] luhmann, t., robson, s., kyle, s., harley, i.: close range photogrammetry: principles, methods and applications, caithness, whittles publishing, 2006. [6] labant, s., kalatovičová, l., kukučka, p., weiss, e.: precision of gnss instruments by static method comparing in real time, acta montanistica slovaca 14 (2009) 1, 55-61. application of e-learning in the temap project markéta potůčková, tomáš bayer department of applied geoinformatics and cartography faculty of science, charles university in prague albertov 6, 128 43 praha 2, czech republic marketa.potuckova@natur.cuni.cz abstract map collections belong to valuable parts of the cultural heritage of each nation. the temap project focuses on development and utilisation of modern digital technologies for on-line accessibility of map collections as well as their cataloguing and protection. the article introduces the concept of an e-learning course that aims at explanation of methodologies and software tools applied and developed in scope of the project. the main focus is on the modules georeferencing and cartometric analysis that comprise description of the on-line tool georeferencer and free software packages mapanalyst and detectproj. keywords: e-learning, temap project, georeferencing, cartometric analysis 1. introduction old cartographic works always attracted professionals as geographers, cartographers, surveyors or historians but also general public. 
fast development of digital technologies and possibility of accessing scanned maps on the internet have even increased an interest in map collections comprising early cartographic works as well as modern cartographic products. they become important gis layers in environmental studies (e.g. land use changes, development of settlements), scientific conferences and publications in the field of historical cartography and geography are numerous (e.g. [1], [2], [3]). contrary to this attention of map users many of map collections lack a thorough evidence of their cartographic works; their funds are not systematically catalogued and digitised. the project “technology for access to czech map collections: methodology and software for protection and re-use of national cartographic heritage” (temap) reflects the needs of memory institutions to process but also protect and make accessible their map collections. by combining knowledge in bibliography, cartography and web technologies the project aims at creating a set of open-source software tools and methodologies that enable processing of old maps and cartographic documents with regard to their cataloguing, cartographic correctness and publishing on the web. the methodology of cataloguing is based on the resource description and access (rda) rules for cartographic documents published by joint steering committee for rda in june 2010. the georeferencer and the detectproj are the main software tools for a map georeferencing and analysis developed within the project and they will be described later in this article. a software solution based on the geonetwork opensource metadata catalogue, postgresql database system and the mapserver/geoserver for management, searching, publishing and on-line presentation of cartographic documents in a web application is also one of the project goals. the developed methodologies and software solutions will be tested on selected cartographic works from the map collections of the partners of the temap project the moravian library, masaryk university in brno and charles geoinformatics fce ctu 9, 2012 91 potůčková m., bayer t.: application of e-learning in the temap project university in prague. the project started in 2011 and it is planned to be finished in 2015. it is supported by the czech ministry of culture. more information can be found on the project homepage http://www.temap.cz/. the project outputs are meant to be of benefit of general and professional public. in order to explain the basics of the used technologies and possibilities of their applications, an e-learning course “old maps – digitizing, cataloguing, analysis” is being designed as a supplement of the project and it is going to be opened for public at the end of 2013. this article first describes a general outline of this course and afterwards it focuses on two of its modules that are related to geoinformatics, namely georeferencing and cartometric analysis. 2. contents of the course map collections specialized on early and old cartographic works are nowadays available on the internet, e.g. david rumsey map collection [4], afriterra [5], the moravian library [6] as well as tools for their georeferencing, e.g. maprectifier [7], worldmap warp [8], eharta [9], georeferencer [10], and analysis mapanalyst [11]. several conferences on cartographic heritage took place in the last decade e.g. [12], [13], [14] and scientific papers on analyses of old maps and their utilization in different projects are numerous. 
the e-learning part of the temap project combines information from these sources with new methodologies and software tools developed in the scope of the project. it uses the moodle learning management system [15] as the platform and it is divided into four modules: • technology of digitizing old maps • principles of cataloguing old maps • georeferencing • cartometric analysis the module technology for digitizing old maps comprises information on technical requirements on scanners suitable for scanning old maps regarding the type of the scanner, its calibration and testing, resolution and formats of output files. issues of protecting valuable old prints as well as problems connected with scanning maps from atlases are mentioned. up-to-date and preferably open-source software tools for creating previews and image tiles usable for on-line publishing are presented. the module is supplemented with examples of solutions utilized in the temap project. objectives of the second module are to introduce the resource description and access (rda) rules applied on cartographic documents and the methodology for cataloguing old maps deposited at the map collection of charles university in prague. it includes formal descriptors of old maps based on the second edition of anglo-american cataloguing rules (aarc2/r), marc 21 format for bibliographic data and international standard bibliographic description for cartographic materials (isbd (cm)) [16]. attention is paid to special features of early maps (extra editions, reprints, scale). examples of maps that have been already processed and can be searched from the charles university centralized catalogue (http://ckis.cuni.cz) are given. modules georeferencing and cartometric analysis are discussed in detail in the following chapters. geoinformatics fce ctu 9, 2012 92 http://www.temap.cz/ http://ckis.cuni.cz potůčková m., bayer t.: application of e-learning in the temap project 3. module georeferencing from the point of view of the temap project, old maps lacking reliable geographic coordinates and scale need to be transformed to a selected georeference system for the purpose of cataloguing (definition of metadata, namely a scale and a map “bounding box”) and their cartometric analysis. the georeferencing module focuses on • image transformations suitable for old maps, • selection of identical points, • available on-line georeferencing tools. theoretical bases are explained in a form of a presentation supplemented with articles explaining geometric transformations [17]. short introduction and links to georeferencing tools such as maprectifier [7] and worldmap warp [8] is given including some technological details. nevertheless, most attention is paid to the georeferencer which extension is one of the project goals. georeferencer this on-line georeferencing tool has been designed and implemented by klokan technologies and the team of the moravian library [10]. using openlayers technology [18] on a client side, the georeferencer allows opening a map that has been already published on the web and collecting identical points that can be identified in a reference map as well. on the server side, the map to be georeferenced must be published using tiles (tile map service [19] or zoomify technology [20]) [21]. the openstreetmap is provided as a reference map in the current version of the software. after registration a user can upload a map of interest and start with collecting identical points (figure 1). 
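the transformation that such on-line tools derive from the collected identical points can be illustrated with a small least-squares fit. the sketch below is not the georeferencer's own code; it only shows the general idea under simple assumptions: an affine transform is estimated from pixel/reference point pairs and written out in the six-line esri world file order (a, d, b, e, c, f). the control point coordinates and the output file name are made up for the example.

```python
import numpy as np

def fit_affine(pixel_xy, world_xy):
    """Least-squares affine transform: world = [[A, B, C], [D, E, F]] @ [col, row, 1]."""
    px = np.asarray(pixel_xy, float)
    w = np.asarray(world_xy, float)
    design = np.column_stack([px, np.ones(len(px))])        # (n, 3)
    coeffs, *_ = np.linalg.lstsq(design, w, rcond=None)     # (3, 2), x and y solved together
    return coeffs.T                                          # (2, 3)

def write_world_file(affine, path):
    """Write the six world-file lines in the order A, D, B, E, C, F."""
    (a, b, c), (d, e, f) = affine
    with open(path, "w") as fh:
        fh.write("\n".join(f"{v:.10f}" for v in (a, d, b, e, c, f)) + "\n")

# Hypothetical identical points: (column, row) on the scanned map versus
# Web Mercator coordinates of the same places on the reference map.
pixels = [(512, 310), (3805, 402), (3710, 2988), (640, 2875)]
mercator = [(1557000.0, 6460000.0), (1642000.0, 6457500.0),
            (1639500.0, 6392000.0), (1560500.0, 6395000.0)]
affine = fit_affine(pixels, mercator)
write_world_file(affine, "old_map.wld")
print(affine)
```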
the advantage is that the points are saved (including their history) and the georeference of a selected map can be changed or improved at any time. the information about the placement of the map in mercator projection used in most web applications [22] can be exported as esri world file or ogc kml (google earth) format. georeferenced maps can be also visualized in google earth (figure 1). moreover, a cartometric analysis using the mapanalyst (see chapter 4) can be also performed on-line. within the temap project the georeferencer is going to be extended of some new features such as support for cartographic projections or search for maps by image similarity [16]. after an internal pilot at the moravian library and the first public pilot at the national library of scotland, the georeferencer has been recently employed at the british library for crowdsourcing of spatial metadata capture for two of its most important collections of historic mapping of britain [23]. 4. module cartometric analysis the goal of a cartometric analysis in the temap project is an assessment of planimetric accuracy and estimation of cartographic parameters of old maps (i.e. map projection and its properties). all analyses are based on a set of corresponding 0d-2d elements in the analyzed and reference maps; algorithms use various methods of robust statistics, point pattern geoinformatics fce ctu 9, 2012 93 potůčková m., bayer t.: application of e-learning in the temap project figure 1: the georeferencer interface for collecting identical points (up). example of emanuel bowen’s map of germany from 1747 published by w. innys et al. (david rumsey map collection [4]). visualisation of the georeferenced map in google earth (down). analysis, shape matching and genetic algorithms. thus, it is necessary to keep in mind that obtained results are dependent on selection of identical points and their number. the e-learning module cartometric analysis concentrates on following features: • residuals on identical points, • positional displacement, • evaluation of planimetric accuracy, • determination of the scale, • analysis of scale in relation to the geographic position, • analysis of rotation in relation to the geographic position, geoinformatics fce ctu 9, 2012 94 potůčková m., bayer t.: application of e-learning in the temap project • estimation of the map projection. theoretical background and methodology of a possible analysis is explained on examples of previous authors’ publications [24], [25]. two free software solutions for a map analysis are introduced and supplemented with data for their evaluation. mapanalyst mapanalyst is a well known and widely applied tool for a cartometric analysis of old maps [26]. it is an open-source java application developed at the institute of cartography ethzurich with following functions: • georeferencing of an early map to a reference map using a set of identical points • visualisation of – distorted graticule – residuals on identical points – isolines of scale distortion – isolines of rotation depending on the number of identical points and the geometrical characteristics of evaluated maps, helmert 4 parameters or affine 5 or 6 parameters transformations can be chosen. moreover, robust huber estimator, v-estimator or hampel estimator can be applied. the mapanalyst user interface is very simple and intuitive so the tool is also suitable for users with only basic knowledge in cartography. 
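a minimal sketch of the 4-parameter (helmert, similarity) fit that mapanalyst offers can look as follows; it estimates one scale, one rotation and a shift from the identical points and returns the residuals that such tools visualise as displacement vectors. the robust (e.g. huber) re-weighting mentioned above would iterate on these residuals but is omitted here for brevity, and the sample coordinates are invented for illustration only.

```python
import numpy as np

def fit_helmert_2d(src, dst):
    """4-parameter similarity transform dst ~ scale * R(angle) @ src + t.

    Solved linearly via a = scale*cos(angle), b = scale*sin(angle).
    Returns (scale, angle_rad, shift, residuals).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    x, y = src[:, 0], src[:, 1]
    design = np.zeros((2 * len(src), 4))
    design[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    design[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    (a, b, tx, ty), *_ = np.linalg.lstsq(design, dst.reshape(-1), rcond=None)
    scale = float(np.hypot(a, b))
    angle = float(np.arctan2(b, a))
    fitted = (design @ np.array([a, b, tx, ty])).reshape(-1, 2)
    return scale, angle, np.array([tx, ty]), dst - fitted

# Hypothetical identical points: coordinates measured on the old map sheet [m]
# versus reference coordinates [m] of the same settlements.
old_map = [(0.102, 0.211), (0.340, 0.198), (0.318, 0.052), (0.120, 0.065)]
reference = [(-745210.0, -1042830.0), (-712480.0, -1044610.0),
             (-715540.0, -1064790.0), (-742760.0, -1062950.0)]
scale, angle, shift, res = fit_helmert_2d(old_map, reference)
print("map scale ~ 1 :", round(scale))
print("rotation [deg]:", round(np.degrees(angle), 2))
print("residuals [m]:\n", np.round(res, 1))
```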
openstreetmap is set as a default reference map but it can be replaced with any georeferenced image file. identical points can be collected either in the mapanalyst or can be imported from a text file. results of the analysis can be exported in several vector (shp, svg, wmf, dxf) and raster (jpeg, png) formats. figure 2 shows graphical outputs of an analysis of the map of bohemia by p. kaerius (1620) from the mapanalyst [24]. detectproj one of the aims of a cartometric analysis is to identify the cartographic projection used for a map construction. the procedure is usually based on a trial-and-error method [26]. the free software package detectproj [27] enables to detect an unknown map projection and estimate its parameters based on the analysis of a set of corresponding 0d-2d elements measured both in the analysed and the reference maps. points of the graticule can also be added in order to increase the reliability of the calculation. due to the fact that early maps are not constructed on solid geometric basis, some of the drawn map elements are highly affected by errors. the detectproj software also contains a utility automatically excluding incorrectly drawn map elements from a further cartometric analysis. the output parameters are an estimated category of a cartographic projection, the latitude of a true parallel ϕ0, true meridian λ0 and a position of the cartographic pole ϕq,λq. the calculation can be done in the normal, transverse or oblique aspects of the projection. early maps were created without any geodetic bases and exact geometric procedures therefore it geoinformatics fce ctu 9, 2012 95 potůčková m., bayer t.: application of e-learning in the temap project figure 2: displacement on identical points (left) and isolines of scale (m/1000) (right) on the map of bohemia by p. kaerius (1620) created in mapanalyst (the isolines were further processed in arcgis) [24] is impossible (and unreasonable) to identify the cartographic projection. the outputs of the detectproj then have only an orientation value. the software supports 55 cartographic projections but there is an option to add a new projection by editing an input definition file. the output is a text file comprising parameters of an estimated projection and values of the evaluation criteria. the concept and methods of control of the software are similar to the cartographic projection library proj 4 [28]. however, it includes a new kernel for cartographic computations and analyses. it is available for os windows, gnu/linux and macos. [27] from the point of view of cataloguing of early maps, the detectproj is valuable for obtaining metadata information about the cartographic projection of map sheets. it can be combined with the georeferencer when collected control points can be used both for the definition of a bounding box of the map and estimation of the cartographic projection. in comparison to the georeferencer and the mapanalyst, the outputs of detectproj are not presented graphically but only in the text format. on the other hand the software offers more sophisticated approach and provides information about criteria used for the estimation of the map projection which requires deeper knowledge in cartography and coordinate transformations. a detailed explanation of used algorithms and criteria is out of the scope of the e-learning course. nevertheless, some short explanation with references is a part of the course. 
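the trial-and-error idea mentioned above can be sketched as follows: project the geographic coordinates of the identical points under several candidate projections, fit a simple homothetic transform (scale and shift) to the map measurements and rank the candidates by the residual rms – loosely analogous to the homt criterion in the detectproj output shown below, although detectproj itself combines several criteria and removes outliers automatically. the candidate projection parameters and the point coordinates are assumptions made only for this example; the snippet relies on the pyproj bindings to the proj library.

```python
import numpy as np
from pyproj import Proj          # assumes pyproj / PROJ is available

def homothetic_rms(projected_xy, map_xy):
    """RMS of residuals after a homothetic fit (one scale + shift, no rotation)."""
    src = np.asarray(projected_xy, float)
    dst = np.asarray(map_xy, float)
    n = len(src)
    design = np.zeros((2 * n, 3))
    design[0::2] = np.column_stack([src[:, 0], np.ones(n), np.zeros(n)])
    design[1::2] = np.column_stack([src[:, 1], np.zeros(n), np.ones(n)])
    params, *_ = np.linalg.lstsq(design, dst.reshape(-1), rcond=None)
    residuals = dst.reshape(-1) - design @ params
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical graticule intersections: geographic coordinates (lon, lat) [deg]
# and the corresponding measurements on the map sheet [m].
lonlat = [(10.0, 40.0), (30.0, 40.0), (10.0, 55.0), (30.0, 55.0), (20.0, 47.5)]
map_xy = [(0.052, 0.048), (0.295, 0.041), (0.071, 0.268), (0.281, 0.262), (0.175, 0.155)]

candidates = {
    "bonne": "+proj=bonne +lat_1=50 +lon_0=20",
    "lcc":   "+proj=lcc +lat_1=45 +lat_2=55 +lon_0=20",
    "eqdc":  "+proj=eqdc +lat_1=45 +lat_2=55 +lon_0=20",
    "stere": "+proj=stere +lat_0=90 +lon_0=20",
}
ranking = []
for name, definition in candidates.items():
    proj = Proj(definition)
    x, y = proj([p[0] for p in lonlat], [p[1] for p in lonlat])
    ranking.append((homothetic_rms(np.column_stack([x, y]), map_xy), name))

for rms, name in sorted(ranking):
    print(f"{name:6s} residual RMS after homothetic fit: {rms:.4f} (sheet metres)")
```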
following example shows a part of an output file from the detectproj applied on the map “europe politique” from the atlas st. cyr. furne edited by jouvet et cie in 1882 that was created in bonne projection (figure 3).

test points: europe_test_50.txt
reference points: europe_reference_50.txt
28 test point(s) have been loaded.
28 reference point(s) have been loaded.
56 cartographic projection(s) have been loaded.
...
0 meridians and 0 parallels have been found... completed.
...
results containg values of the criteria:
# proj categ latp lonp lat0 lon0 bkey cnd[m] homt[m] +mt helt[m] +mt vdtf
1 bonne psconi 90.0 0.0 54.7 20.2 100% 1.41e+4 1.39e+4 100% 1.39e+4 100% 2.79e+0
2 lcc coni 90.0 0.0 52.9 20.0 100% 1.95e+4 1.63e+4 100% 1.62e+4 100% 2.90e+0
3 eqdc coni 90.0 0.0 53.0 20.1 100% 2.12e+4 1.86e+4 100% 1.86e+4 100% 4.56e+0
4 poly polyco 90.0 0.0 10.0 20.1 100% 2.40e+4 2.41e+4 100% 2.40e+4 100% 2.87e+0
5 aea coni 90.0 0.0 54.0 19.9 100% 2.50e+4 2.26e+4 100% 2.25e+4 100% 4.59e+0
6 wer psazim 90.0 0.0 0.0 20.0 100% 2.42e+4 2.49e+4 100% 2.48e+4 100% 4.58e+0
7 leac coni 90.0 0.0 47.2 19.9 100% 3.38e+4 2.85e+4 100% 2.84e+4 100% 4.61e+0
8 eqdc2 coni 90.0 0.0 43.5 19.9 100% 3.79e+4 3.41e+4 96% 3.41e+4 96% 4.73e+0
9 stere azim 90.0 0.0 0.0 19.9 100% 7.74e+4 7.17e+4 82% 7.17e+4 82% 4.86e+0
10 solo azim 90.0 0.0 0.0 19.9 100% 8.06e+4 7.42e+4 78% 7.42e+4 78% 4.88e+0
--------------------------------------------------------------------------------------------
proj – detected projection
categ – category of projection
latp, lonp – geographical coordinates of the cartographic pole
lat0 – latitude of the true parallel
lon0 – longitude of the prime meridian
bkey – percentage of points fitting used geometrical model (without detected outliers)
cnd – criterion based on cross nearest distance
homt – standard deviation on identical points after homothetic transformation
helt – standard deviation on identical points after helmert transformation
vdtf – criterion based on the voronoi diagram similarity
+mt – ratio of tested points inside of the tissot's indicatrixes constructed on the reference points

figure 3: detection of a projection applied on the map “europe politique” from the atlas st. cyr. furne edited by jouvet et cie in 1882 (david rumsey map collection [4]). the upper part of the figure shows an overview of an original map (left) and the graticule of the detected bonne projection (right). the text below is a subset of the detectproj software output file supplemented with an explanation of abbreviations of selected criteria.

5. conclusion the presented e-learning course provides learning materials from two different fields, bibliography and geoinformatics, and it is in the first place intended to help and to train personnel from institutions dealing with digitizing and cataloguing map collections. according to the knowledge of the authors such a course is not available on-line yet. gathering metadata and georeferencing of old maps is a tedious and expensive work. crowdsourcing of this process seems to be a meaningful solution as the pilots from the uk have shown [23]. in this case the course can help to those volunteers who have a deeper interest in cartography. the course can possibly guide them so the obtained metadata will be more reliable.
moreover, the course will support lectures in cartographic subjects such as history of cartography that are a part of curricula of the master degree in cartography and geoinformatics at charles university in prague. the described contents of the e-learning course is now being implemented in the moodle learning management system running on the server of charles university [29]. during 2013 the czech version will be completed and its english counterpart is going to be finished a year after. acknowledgement development of the presented e-learning course is done with the support of the ministry of culture of the czech republic, nr. df11p01ovv003: temap: technology for access to czech map collections: methodology and soft ware for protection and re-use of national cartographic heritage. references [1] 5th international workshop on digital approaches in cartographic heritage in vienna, february 22–24, 2010, http://cartography.tuwien.ac.at/cartoheritage/ [2] gartner, g., ortag, f. (eds.): cartography in central and eastern europe, springer verlag berlin heidelberg 2010, isbn 978-3-642-03294-3. [3] liebenberg, e., demhardt, i. j. (eds.): history of cartography, springer verlag berlin heidelberg 2012, isbn 978-3-642-19087-2 [4] david rumsey map collection, http://www.davidrumsey.com/ [5] afriterra cartographic library and archive, http://www.afriterra.org/ [6] moll’s map collection, the moravian library, http://mapy.mzk.cz/en/mollova-sbirka/ [7] maprectifier, http://labs.metacarta.com/rectifier/ [8] worldmap warp, http://warp.worldmap.harvard.edu/ [9] eharta, http://earth.unibuc.ro/articole/eharta?lang=en [10] georeferencer http://www.georeferencer.org/ [11] mapanalyst, http://mapanalyst.org/ [12] workshop on archiving in digital cartography and geoinformation, berlin, germany, december 4-5, 2008, http://www.codata-germany.org/archiving_2008/ [13] ica commission on digital technologies in cartographic heritage, http://xeee.web. auth.gr/ica-heritage/ [14] international conference: historic maps and imagery for modern scientific applications ii, bern, october 1-3, 2009, http://maps.unibe.ch/2009.html geoinformatics fce ctu 9, 2012 98 http://cartography.tuwien.ac.at/cartoheritage/ http://www.davidrumsey.com/ http://www.afriterra.org/ http://mapy.mzk.cz/en/mollova-sbirka/ http://labs.metacarta.com/rectifier/ http://warp.worldmap.harvard.edu/ http://earth.unibuc.ro/articole/eharta?lang=en http://www.georeferencer.org/ http://mapanalyst.org/ http://www.codata-germany.org/archiving_2008/ http://xeee.web.auth.gr/ica-heritage/ http://xeee.web.auth.gr/ica-heritage/ http://maps.unibe.ch/2009.html potůčková m., bayer t.: application of e-learning in the temap project [15] moodle learning management system, https://moodle.org/ [16] žabička, p., přidal, p., konečný, m., novotná, e.: temap : technology for discovering of map collections. poster. in.: 15. international conference of historical geographers. prague : charles university, faculty of science, august 6.-10., 2012. [17] boutoura, c., livieratos, e.: some fundamentals for the study of the geometry of early maps by comparative methods, e-perimetron, vol.1, no. 1, winter 2006, http://www. 
e-perimetron.org/vol_1_1/boutoura_livieratos/1_1_boutoura_livieratos.pdf [18] openlayers, http://openlayers.org/ [19] tile map service specification, http://wiki.osgeo.org/wiki/tile_map_service_ specification [20] zoomify technology, http://www.zoomify.com/ [21] přidal, p., žabička, p.: tiles as an approach to on-line publishing of scanned old maps, vedute and other historical documents, e-perimetron, vol. 3, no. 1, 2008, http://www.e-perimetron.org/vol_3_1/pridal_zabicka.pdf [22] spherical mercator projection, http://trac.osgeo.org/openlayers/wiki/sphericalmercator [23] kowal, k. c., přidal, p.: online georeferencing for libraries: the british library implementation of georeferencer for spatial metadata enhancement and public engagement, journal of map & geography libraries: advances in geospatial information, collections & archives, 8:3,2012, pp. 276-289 [24] bayer, t., potůčková, m., čábelka, m.: kartometrická analýza starých map českých zemí: mapa čech a mapa moravy od petra kaeria. in: geografie sborník české geografické společnosti 2009/3, s. 230-243, issn 1212-0014. [25] bayer, t., potůčková, m., čábelka, m.: cartometric analysis of old maps on example of vogt's map. in: ica symposium on cartography for central and eastern europe, lng&c, pp 509-522, vienna, springer, 2009, isbn: 978-3-642-03293-6. [26] jenny, b., hurni, l.: studying cartographic heritage: analysis and visualization of geometric distortions, computers & graphics 35 , 2011, pp. 402–411 [27] detectproj software and manual, http://web.natur.cuni.cz/~bayertom/software.html [28] proj.4 library, http://trac.osgeo.org/proj/ [29] moodle charles university in prague, http://dl2.cuni.cz/ geoinformatics fce ctu 9, 2012 99 https://moodle.org/ http://www.e-perimetron.org/vol_1_1/boutoura_livieratos/1_1_boutoura_livieratos.pdf http://www.e-perimetron.org/vol_1_1/boutoura_livieratos/1_1_boutoura_livieratos.pdf http://openlayers.org/ http://wiki.osgeo.org/wiki/tile_map_service_specification http://wiki.osgeo.org/wiki/tile_map_service_specification http://www.zoomify.com/ http://www.e-perimetron.org/vol_3_1/pridal_zabicka.pdf http://trac.osgeo.org/openlayers/wiki/sphericalmercator http://web.natur.cuni.cz/~bayertom/software.html http://trac.osgeo.org/proj/ http://dl2.cuni.cz/ developing web map application based on user centered design petr voldan department of mapping and cartography faculty of civil engineering, czech technical university in prague petr.voldan fsv.cvut.cz keywords: user centered design, web mapping service, user interface, usability abstract user centred design is an approach in process of development any kind of human product where the main idea is to create a product for the end user. this article presents user centred design method in developing web mapping services. this method can be split into four main phases – user research, creation of concepts, developing with usability research and lunch of product. the article describes each part of this phase with an aim to provide guidelines for developers and primarily with an aim to improve the usability of web mapping services. introduction it is possible to say that the need to involve usability technics into development of different types of web applications is widely accepted. unfortunately, the author’s experience based on his study and projects is different in the czech environment. we still face the development problems where "form follows the functions". this approach makes sense for the inner parts of an application – the parts concealed from a user. 
however, when it comes to the parts of product that are user-facing (the buttons, displays, labels, and so forth) the "correct" form isn’t dictated by functionality at all. instead, it’s dictated by the psychology and behavior of the users themselves [9]. according to [10] current body of research is highly significant in the development of gis for professionals, but it seems that the common user has been neglected or almost forgotten. on the other hand there is a lot of research focusing on user usability of the common web applications – as are e-shops or reservation systems. but although web maps work in web browsers with a graphic user interface, they are quite different from general web pages or computer applications [21]. we should also focus on the usability mapping portals because web maps have became popular on the web due to their convenience and low cost [12]. the numbers of users with specific abilities, knowledge or requirements are increasing. today’s web map applications don’t only provide the maps, but also provide various mapping tools or services – finding addresses, travel planning or finding points of interest. unfortunately a lot of people have problems to control a map or to use the mentioned services [20]. these facts require a more specific approach to a development that is focused not only on information technology but also on casual users. in terms of users there is a list of techniques called user centered design (ucd). this article uses the experiences of the author with ucd from the professional area and tries to offer this method in field of geoinformatics. geoinformatics fce ctu 2011 131 voldan p.: developing web map application based on user centered design user centered design definition ucd as a multi-disciplinary activity, which incorporates human factors and ergonomics knowledge and techniques with the objective of enhancing effectiveness and productivity, improving human working conditions, and counteracting the possible adverse effects of use on human health, safety and performance [2]. the iso 13407 standard [1] provides a list of instruction stating how to use human-centered activities during development process, but does not specify any exact methods. ucd can be split into four main phases – analysis, design, implementation and lunch of product. relations between mentioned phases are shown on figure 1. figure 1: ucd phases why matter "good design is good business" – thomas j. watson the mentioned iso 13407 standard says that human-centered systems empower users and motivate them to learn. the benefits can include increased productivity, enhanced quality of work, reductions in support and training costs and improved user health and safety. if we choose the main benefits, which states [2] then ucd can 1. reduce development time • focusing on users and their needs reduces the late changes and additional cost of future redesign to make a new and more usable version of a product 2. increase sales • marketing of usable product is easier • product will have higher rating in the trade press 3. save users time geoinformatics fce ctu 2011 132 voldan p.: developing web map application based on user centered design • reduce task time • increase productivity • users make fewer errors and this fact leads to increased quality of service • reduce the need for training, requiring assistance 4. observe legislation • public offices have to provide information based on the law 365/2000 in the czech republic. 
this law describes access to information for persons with disabilities. c. m. karat published first analysis of ucd in terms of cost-benefit in 1991 [11]. the analysis shows a 2:1 dollar savings-to-cost ratio for a relatively small development project and a 100:1 savings-to-cost ratio for a large development project. we shouldn’t look for the advantages only in financial terms, but especially focus on the end user – as is mentioned in the name of this method. it is the end user, which we should keep in mind during all developing processes. ucd phases analysis developing good product doesn’t start with sketching of interfaces or programming. before you write the first line of a code or sketch the first segment of a product interface you should answer the basic questions: • what do users want? • what do we want? these questions can uncover basic product objectives and user’s needs. of course there are more objectives then user’s needs. in software creation there are generally more requirements: • business/marketing goals • functional or technical requirements initial analysis saves time and money in next phases of development. user research first part of every project should be a complex research about our end users. good interface has nothing to do with pictures or diagrams. for efficient interface it is vital to understand people – who they are, what they are like and what they really think. because every user is unique, we need to obtain enough information of an individual user and then extract generally true information about a group of users. we need to learn: • their goals • tasks they will be faced with in our product geoinformatics fce ctu 2011 133 voldan p.: developing web map application based on user centered design • their language and words (language they use during solving process) • their attitudes toward our product let’s think of this example. you develop one part of a mapping application that will be used for searching directions. let’s ask a simple question: why will users use your application? of course for searching directions but what will be the most important thing for them? is it the total distance or a travel time? is the user interested in the itinerary? are the names of cities important to the users? do users use connections between more point/cities? generally you can’t answer these questions without knowing your users. you can speculate what they want, or you can find out. here are some methods used in user research process [18]. 1. direct observation – the idea is to create interviews on-site for users because the best place for watching users is in their natural environment [6]. 2. surveys – we lose the advantage of a direct contact, data are not certain but usually we get significant numbers of answers – it’s possible to use a statistical approach. 3. others: focus groups, case studies – these methods can be useful too (though not so common as surveys or direct observation). the best approach in user research is a direct observation. this method provides precise information but on the other hand it is very time-consuming. conversely today’s on-line surveys are a low-cost, provide a variety of data but there’s no direct human contact and this can miss a lot of extra information. the author took an advantage of an on-line survey during developing 3d interface of gis software [19]. if you decide to use surveys keep in mind the following points: • before you publish your survey, do pilot test with your colleagues, co-workers. 
this clarifies understandability of survey’s questions and assures that there is no problem with filling the answers. • think about what you want to learn and how to create the question. • length of the questionnaire – practical aspect influences the quality of the data, because the research depends on cooperation with people who answer the questions. the filling of questionnaire should not be longer than 25 minutes [16]. for this reason you should perform the mentioned pilot test. personas one of the most dangerous practices in product development is isolating the designers from the users. ucd is about involving users into the process of developing. can you invite all of your users into your office and cooperate with them during the developing? i don’t think so but you can transform your users into personas. cooper [6] proposed an interesting interpretation of a persona. persona is a mask used in ancient greek play – persona is a social role of people in a specific context. there are some principles about personas: • persona is a fictional character who represents the needs of a whole range of our users [9] • persona has a name, age, characteristic as real people do geoinformatics fce ctu 2011 134 voldan p.: developing web map application based on user centered design • characteristic of persona is created on a based user research or it can be fiction • using personas helps the team focus on the users’ goals and needs [3]. personas increase usability of the developing application. a report from a user research can be very detailed and an extensive document. persona allows to extract the relevant information from a user research and offers these data in a different way. persona is usually represented by a small paper card that can everybody print off and keep in mind. let’s see a short example. from a user research we know that our users are mostly 25-34 old. they described themselves as comfortable using the web. one third of users visit some map portals only a few times per month. some of them have never displayed an aerial photo map on a web map portal. figure 2 shows sample persona that was created based on a user research. figure 2: persona the benefit of a persona is in their ability to filter personal ideas from the developing team. thinking in the ucd way is not natural. our more natural tendency is to be self-centered, which translates to taking an approach to product design based on our own wants and needs (even if we are not actually a user of the product) [7]. personas help to highlight important information from user research. detailed research is the basis of the entire project, but the amount of data can cause problems in the next phase of the project. reports of users and their needs are not always seen as relevant, and even if they are, the reports themselves are often cumbersome, tedious, and difficult to apply in the day-to-day development process [15]. geoinformatics fce ctu 2011 135 voldan p.: developing web map application based on user centered design creating persona does not consist of the fact that someone from the team creates a card with a photo and biographical data. the main information has to be obtained on the basis of a user research. personas derived from other sources than the empirical research are merely modeled from assumptions that have necessarily a limited point of view and experiences of their creators [8]. the whole development team should be involved in creating the personas if at all possible. 
this ensures the whole team will identify with the personas and secures their real usage of personas across the developing team. design "designing a good interface isn’t easy" – jenifer tidwell in this section we use information, obtained from analysis phases, for design of successful interface elements. the question is: what is a successful interface? for programmers it is an interface or a software that never breaks, but we can’t use this approach in terms of design. successful interfaces are those where users immediately notice the important stuff. unimportant stuff, on the other hand, doesn’t get noticed [9]. if the user can’t find a function, it doesn’t exist. the challenge in a successful interaction design is to find out which parts of interfaces are important and how users interact with them. wireframes the first step in a design phase should be creation of prototypes – in our case wireframes. wireframe is a skeleton or bare-bones interpretation of a page component, interface elements, navigation elements. wireframe can have many forms – from text interpretation to a very complex sketch. but the best and most widely used wireframes are very simple sketches. as personas, wireframes can involve the whole team in the process of a development – primarily people without programming knowledge (managers, designers). wireframes is the place where everybody can discuss or share their ideas. designers can mock up a visual interface and programmers understand the page features. coding the whole application takes time and money, conversely wireframes enable to experiment in the safety of a form which can be easily changed without much loss of time or wasted effort compared to re-programming software [4]. you can use an eraser on the drafting table or a sledge hammer on the construction site. – frank lloyd wright an example of a wireframe used during developing a simple web map application is shown in figure 3. for creating wireframes you can use sophisticated software (pencil, justinmind, mockingbird) or just a piece of paper and a pencil. don’t worry that the sketch will not be as detailed as the final product – a simple sketch works fine. very interesting is the fact that wireframes are also suitable for early usability testing. the process of testing is almost the same as with usability testing, which is described in more detailed in chapter "usability testing". geoinformatics fce ctu 2011 136 voldan p.: developing web map application based on user centered design figure 3: wireframe of web map application functional prototype in most cases functional prototype is a simple web page that may look as a final product, but without the full functionality. people want to postpone usability testing to a stage when an application is almost complete. they believe that testing is meaningful if the application works correctly. indeed the exact opposite is true. usability testing experts agree that serious problems in the application design can be detected in an early stage of a development [17] [14]. there is also the fact that large changes in the final stages of the product development are very complicated and expensive. functional prototype can be a compromise because we provide something that looks as the planned application but we still have the options to make changes easily. with today’s prototyping software it is very easy to prepare a functional prototype, that can be subsequently used for usability testing. the process of usability testing is described in chapter "usability testing". 
implementation

usability testing

usability testing is a method for evaluating our product with real users. if your goal is to develop an easy to use product, you should perform several usability tests. you may leave out the previously mentioned phases of ucd, but usability testing is (from the author's point of view) the best way to find out how users will work with our map product. jakub spanihel, ux designer at symbio, said at prague ux camp in 2010: "don't you test? – suicides!". according to [13] it is possible to split this method into two different types: classical usability testing and cheap user testing. the classical way is described comprehensively in [17]. this method requires a usability laboratory and a specific selection of participants; it takes more time but gives better results. let us describe the second – the cheap method, which also offers quality results without spending money on hiring expensive equipment.

number of participants | finding the usability problem
1 | 50%
2 | 75%
3 | 87%
5 | 97%
6 | 98%
7 | 99%
8 | 99%

table 1 – determining the correct number of usability test participants [5]

1. finding participants – the first step is about finding appropriate participants. you should look for a specific group based on the aim of your application (cartographers, managers or the general public). sometimes it is not possible to find the right participants – in this case you can invite your colleagues from different departments or friends, but try to avoid this situation. send out the invitations two weeks before the planned usability test. you can motivate participants by a cash bonus – the author offered small gifts in the form of promotional items in his projects. there is a lot of discussion about the number of participants. if you want to do a usability test in a quick and cheap form, then eight participants are enough. bob bailey [5] studied the correct number of usability test participants; the results of his research are shown in table 1. based on this research it is possible to say that only four participants can detect most user problems. usability testing is a qualitative method, not a quantitative one. it is more efficient to perform several tests with fewer participants during the development than one test with a lot of participants. "testing one user is 100 % better than testing none." – steve krug

2. preparing the task scenario – your goals and hypotheses for developing an interface are the basis for usability testing. think about what you want to learn and create the tasks for participants. the task scenario is a coherent story that follows or covers your tasks. you should create a real story – the participants should feel that they know the situation from their own lives. an example of a task scenario used in usability testing of web map portals is published in [20].

3. prepare the test room – a standard quiet office serves as a test room if you are performing cheap testing. the room should be equipped with all the necessary equipment (also refreshments) required to solve the defined tasks. don't forget to erase personal preferences and history in the web browser (before each participant). a schema of a test room is shown in figure 4. a moderator is a person who communicates with the participants and goes through the task scenario with them. the main task of an observer is to note the important problems as well as the positive points. the observer can also be the programmer of the tested application – he can see the usability errors first hand.

4.
test session – ensure that each test with particular participants will be the same. present the scenario to each participant before the start of the test and ensure them geoinformatics fce ctu 2011 138 voldan p.: developing web map application based on user centered design that the user is not being tested, but mapping sites. when the moderator introduces the observer it is often convenient to tell a lie. the moderator shouldn’t mention that someone in the room is involved in developing the applications. when the test starts, ask the participant to think aloud. thinking aloud is a technique where users speak out loud about their thoughts and feelings. together with user’s speak out loud use "why" method. ask them: why? why have you done this? please keep in mind that usability test shouldn’t be longer than sixty minutes. be patient, benevolent, dismiss the comments or ridicules. if you see that your participant is totally lost help him. 5. debriefing – after the execution of all tasks discuss with the participant and the observes the test conduct, any problems or feelings about the test. 6. report – analyze notes, identify user errors and difficulties. after just three conducted sessions you will see the repeated problems – the main errors. 7. next test – you should perform a new testing after repairing discovered problems. figure 4: scheme of the test room launch you could say that the launch is the final stage of the development. however, as is shown on figure 1 ucd is an iterative process. after application lunch you can use surveys to get user feedback, collect statistic, do some changes and then repeat usability testing. providing a great user experience is an ongoing process [2]. conclusion and discussion the paper describes usd in the simple form and phases. the objective of this article is not a full description of this method and it doesn’t attempt to be. for each chapter of this article it would be possible to find specific literature, which describes it in more detail, here is mentioned only a part of ucd. the author presents some methods of ucd that he used geoinformatics fce ctu 2011 139 voldan p.: developing web map application based on user centered design in previous projects focused on developing web-based map application. particularly usability testing is a complex method that can partially replace user research phases of ucd. unfortunately, foremost in the czech environment, the author still encounters a negative attitude towards usability technics. for this reason this article has been written and thus introduction of ucd in geoinformatics environment. by author’s opinion ucd process is what should geoinformatics developers be interested in. the need for development usability techniques is higher in development mapping application field than in any other area. developers should think about final users and guarantee the usability efficiency and effectiveness of map applications. references 1. human-centered design for interactive systems, international organisation for standardisation, geneva, switzerland, 1999. 2. the benefits of user centred design, 2011, online at http://www.usabilitynet.org/trump/methods/integration/benefits.htm 3. develop persona, 2011, online at http://www.usability.gov/methods/analyze_current/personas.html. 4. arnowitz, j.; arent, m. n. effective prototyping for software makers. the morgan kaufmann series in interactive technologies. elsevier, 2007, isbn 978-0-12-0885688. 5. bailey, b. 
determining the correct number of usability test participants, 2011, online at http://www.usability.gov/articles/newsletter/pubs/092006news.html. 6. colborne, g. simple and usable: web, mobile, and interaction design. voices that matter series. new riders, 2010, isbn 978-0-321-70354-5. 7. cooper, a.; reimann, r. c. d. about face 3: the essentials of interaction design. wiley pub., 2007, isbn 978-0-470-08411-3. 8. franc, j. diskuse kritických ohlasů na adresu person. in uživatelsky přívětivá rozhraní, 1.vyd. horava & associates, praha, 2009, 180 s. 9. garrett, j.-j. the elements of user experience: user-centered design for the web and beyond, second edition, new riders, 2011, isbn 978-0-321-68368-7. 10. haklay, m.; zafiri, a. usability engineering for gis: learning from a screenshot. the cartographic journal 47 (2008), pp 87–97. 11. karat, c.-m. cost-benefit analysis of usability engineering techniques. human factors and ergonomics society annual meeting proceedings (1991), pp 839–845. 12. kraak, m.-j.; brown, a. web cartography—developments and prospects. geographic information systems workshop. taylor & francis, 2001, isbn 9780748408696. 13. krug, s. webdesign nenuťte uživatele přemýšlet, 2. vyd., voices that matter. 2006, isbn 80-254-1291-8. geoinformatics fce ctu 2011 140 http://www.usabilitynet.org/trump/methods/integration/benefits.htm http://www.usability.gov/methods/analyze_current/personas.html http://www.usability.gov/articles/newsletter/pubs/092006news.html voldan p.: developing web map application based on user centered design 14. krug, s. nenuťte uživatele přemýšlet! praktický průvodce testováním a opravou chyb použitelnosti webu. computer press, 2010, isbn 978-80-251-2923-4. 15. pruitt, j.; adlin, t. the persona lifecycle: keeping people in mind throughout product design. the morgan kaufmann series in interactive technologies. elsevier, 2006, isbn 978-0-12-566251-2 . 16. punch, k.-f. základy kvantitativního šetření, 1. vyd. portál, praha, 2008, isbn 978-80-7367-381-9. 17. rubin, j.; chisnell, d. handbook of usability testing: how to plan, design, and conduct effective tests, second edition ed. wiley publishing, inc., indianapolis, 2008, isbn 978-0-470-18548-3. 18. tidwell, j. designing interfaces. o’reilly series. o’reilly media, 2010, isbn 9780-59-600803-1. 19. voldan, p. research grass wxgui 3d, 2010, online at http://edu.surveygizmo.com/s3/392936/research-grass-wxgui-3d. 20. voldan, p. usability testing of web mapping portals. geoinformatics fce ctu 5 (2010), pp 57–65. 21. you, m.; chen, c.-w. l. h. l. h. a usability evaluation of web map zoom and pan functions. international journal of design 1 (2007), pp 15–25. geoinformatics fce ctu 2011 141 http://edu.surveygizmo.com/s3/392936/research-grass-wxgui-3d from discovery to impact near earth asteroids miloš tichý1,2, michaela honková1,3, jana tichá1, michal kočer1 1kleť observatory, zátkovo nabřeží 4, cz-370 01 české budějovice south bohemia, czech republic 2czech technical university in prague, faculty of civil engineering, department of advanced geodesy, czech republic 3brno university of technology, faculty of mechanical engineering, institute of mathematics, czech republic mtichy@klet.cz abstract the near-earth objects (neos) are the most important of the small bodies of the solar system, having the capability of close approaches to the earth and the chance to collide with the earth. 
we present here the current system of discovery of these dangerous objects, standards for selecting useful and important targets for neo follow-up astrometry, system of impact probabilities calculations, and also determination of impact site and evacuation area. keywords: asteroid, near earth object, astrometry, impact probability 1. introduction various kinds of small solar system bodies orbit the sun. minor planets and comets are significant members of these solar system small bodies population. signs of catastrophic collisions between the small bodies and the earth can be seen on the earth’s surface. therefore to avoid further collisions it is necessary to study near earth objects, i.e. small solar system bodies, whose orbit crosses the orbit of the earth. the near earth objects (neos) are the closest small neighbours of the earth. the neo research is a quickly expanding field of astronomy, important both for solar system science and for protecting human society from minor planets and comets hazard. near earth objects are sources of impact risk and represent usually a low-probability but potentially a very highconsequence natural hazard. studies of neos moreover contribute importantly to our overall understanding of solar system, its origin and evolution. by definition, near earth objects (neos) are minor planets (asteroids) and comets with perihelion distance less than 1.3 astronomical unit (au). the vast majority of neos are asteroids, referred to as near-earth asteroids (neas). neas are divided into four groups (inner earth object /ieo/ or atira, aten, apollo, amor) according to their perihelion distance, aphelion distance and their semi-major axes. the nea groups are named according to the significant representant of the group (2062) aten discovered january 7, 1976; (1862) apollo discovered in 1932, but then it was lost until 1973; (1221) amor discovered also in 1932. there are currently nearly 9000 known neas (2012 july) [1]. geoinformatics fce ctu 8, 2012 73 tichý, m. et al.: from discovery to impact near earth asteroids the closer passing larger neas are called potentially hazardous asteroids (phas). phas are neas whose minimum orbit intersection distance (moid) with the earth is 0.05 au or less and whose absolute magnitude (h) is 22.0 or brighter, correlating with estimated diameter exceeding 140 meters. there are currently more than 1300 known phas (2012 july). however, the most important sub-category of near earth objects are so-called virtual impactors (vis) [2]. virtual impactors are objects for which possible impact solutions, nonzero probability of collision for the next 100 years, exist. probabilities are calculated from observed positions. as new positional observations become available, the object’s orbit is improved, uncertainties are reduced, and the impact solutions are most likely ruled out for the future 100 years, the object being eventually removed from the virtual impactors list. 2. neo inventory the first task of neo research is to make inventory of the near earth objects population, therefore the main task of the current near earth objects surveys is to contribute to the inventory of population of neos, and more specifically, potentially hazardous asteroids (phas) and comets that may pose a threat of impact and thus harm to civilization. among the most prolific neo surveys belong the catalina sky survey, pan-starrs, linear and spacewatch. 
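the group boundaries are conventionally expressed through the semi-major axis a, the perihelion distance q and the aphelion distance Q of the orbit. the sketch below applies the commonly quoted limits (the exact thresholds, 0.983 au and 1.017 au, are not stated in the text and are taken from the usual definitions), together with the pha criterion given above (moid ≤ 0.05 au and h ≤ 22.0).

```python
def classify_nea(a_au: float, q_au: float, Q_au: float) -> str:
    """assign a near-earth asteroid to one of the four groups.

    a_au ... semi-major axis, q_au ... perihelion distance, Q_au ... aphelion
    distance, all in astronomical units; the thresholds follow the commonly
    used definitions, which the text itself does not spell out.
    """
    if q_au >= 1.3:
        return "not a NEO (perihelion distance >= 1.3 AU)"
    if Q_au < 0.983:
        return "Atira (IEO): orbit entirely inside the Earth's orbit"
    if a_au < 1.0:
        return "Aten: Earth-crossing, a < 1 AU"
    if q_au < 1.017:
        return "Apollo: Earth-crossing, a > 1 AU"
    return "Amor: approaches but does not cross the Earth's orbit"


def is_pha(moid_au: float, h_mag: float) -> bool:
    """potentially hazardous asteroid: MOID <= 0.05 AU and H <= 22.0 (roughly > 140 m)."""
    return moid_au <= 0.05 and h_mag <= 22.0


# (1862) Apollo has roughly a = 1.47 AU, q = 0.65 AU, Q = 2.29 AU
print(classify_nea(1.47, 0.65, 2.29))        # -> Apollo group
print(is_pha(moid_au=0.026, h_mag=16.3))     # -> True (large object with a small MOID)
```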
after the actual discovery the measured positions are immediately sent to the minor planet center (mpc) of the international astronomical union (iau). candidates of newly discovered near earth objects are then published on the neo confirmation page of mpc [3]. using the service of mpc the observers are able to calculate ephemerides for these bodies (i.e. usualy as a table of values that gives the positions of astronomical object in the sky at a given time as if observed from a particular point on the earth’s surface topocentric positions of the object on the sky in given time interval) including uncertainty plots for followup observations. further observations of discovered objects are crucial necessary for reliable orbit calculation. there is a number of cooperating follow-up observers all over the world targeting neos in need of orbital improvement. follow-up facilities in europe are improving their capabilities and providing critical longitudinal coverage. in particular, european followup stations often target neos discovered the previous night from the south-western us [4] . this is the work to which the klet observatory in south bohemia greatly contributes to and the 1.06-m klenot telescope at kleť is still the largest telescope in europe used exclusively for neo follow-up [5]. if the reliable orbit is calculated, information of the newly discovered near earth object is published including all used astrometric observations and orbital elements in minor planet electronic circular (mpec) and impact probability is immediately determined by both jpl sentry system (nasa neo office) [6,9] and neodys system (university in pisa) [7]. if the nonzero probability of impact for the next 100 years is calculated, then it is necessary to acquire additional information about orbit of the object and to determine its physical and chemical characteristics. furthermore, an object may come close to the earth repeatedly on its orbit around the sun. every approach to the planet changes its orbit and may lead to future impact to the earth. possible orbits leading to impact solutions in the future can be determined, outlining a ’keyhole’ area of the sky. if the asteroid doesn’t pass through the keyhole, we know it will miss the earth on its future encounter. if the probability of impact steeply rises with newly obtained observations instead of dropping down to zero, is it necessary to activate methods of defence. geoinformatics fce ctu 8, 2012 74 tichý, m. et al.: from discovery to impact near earth asteroids figure 1: known near-earth asteroids 1980-january through 2012 march the figure 1 shows the cumulative total known near-earth asteroids versus time. the white area shows all near-earth asteroids while the black area shows only large near-earth asteroids (those with diameters roughly one kilometer and larger) [8]. 3. scenario of impact to the earth the asteroid or comet impact to the earth has four phases. in the first phase the object travels in the space close to the earth. probability of the impact is rising to 1. in the second phase the object enters the earth’s atmosphere. impact area is determined a few days ahead and was already to be evacuated. the third phase is the impact to the earth itself. a crater is created and ejecta is flung around. in the last fourth phase shock waves propagate the earth from the impact area. energy released and therefore the crater diameter depends on the mass of the body and its velocity. 
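that energy can be estimated very roughly from the kinetic energy of the body. the sketch below assumes a spherical object, a bulk density of about 2600 kg/m³ and an encounter velocity of 20 km/s – the text does not state which values it uses, so these are assumptions chosen only to reproduce the order of magnitude of the figures quoted in the next paragraph – and converts the result to megatonnes of tnt.

```python
import math

MEGATONNE_TNT_J = 4.184e15     # one megatonne of TNT expressed in joules


def impact_energy_mt(diameter_m: float,
                     density_kg_m3: float = 2600.0,
                     velocity_m_s: float = 20000.0) -> float:
    """very rough impact energy of a spherical asteroid in megatonnes of TNT."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * 4.0 / 3.0 * math.pi * radius ** 3
    return 0.5 * mass * velocity_m_s ** 2 / MEGATONNE_TNT_J


for d in (10, 100, 1_000, 10_000):
    print(f"{d:>6} m body: ~{impact_energy_mt(d):,.2f} Mt TNT")
# prints roughly 0.07, 65, 65 thousand and 65 million Mt TNT, i.e. the same
# order of magnitude as the 0.06, 75, 75 thousand and 75 million Mt quoted below
```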
approximating the mass by object’s diameter for an average asteroid density, and assuming common velocity, a 10 m body has impact energy about 0.06 mega tonnes (mt) tnt and is capable of local destruction with crater diameter about 300 meters. larger 100 m body has at the time of impact an energy of 75 mt tnt and creates crater 2 kilometres in diameter. that’s comparable with the strongest ever detonated nuclear weapon the tsar bomb which had about 50 mt tnt. one kilometer body would cause an explosion equal to 75 thousands mt tnt and 11 km crater with regional to global catastrophe. energy of impact for a 10 km body would be 75 millions mt tnt with a crater 70 km in diameter and inevitable global catastrophe. 4. methods of defence neo impact is the only natural hazard which our civilization can predict and could avoid entirely. deflection of such a body is feasible most effective is changing its speed along its path. disruption following with dispersion are also possible. finally, complete destruction of the object could also be an option. deflection is the safest method but requires a warning geoinformatics fce ctu 8, 2012 75 tichý, m. et al.: from discovery to impact near earth asteroids time of a few years to sufficiently change the objects orbit. the second method is disruption following with dispersion, but the asteroid broken into several pieces may just exacerbate the problem. the third option is complete destruction. that requires reliable knowledge regarding its internal structure, rotation, composition etc. strong enough explosives may not be easily available to destroy a 100 m object an energy more than 30 mt tnt is required, therefore conventional explosives are out of question, and there could arise troubles with acceptability and space transportation of nuclear weapons. let’s look in detail on deflection methods in the further text. every scenario can be carried out using either non-nuclear or nuclear methods. considering non-nuclear methods, the rotational speed of the object must be taken into account. for fast rotating bodies explosives (standoff, surface or subsurface explosions) and kinetic impactor (similar to impactor probe of comet tempel 1) can be used. for slowly rotating objects following methods could be considered: focused solar radiation, ion beam, gravity tractor and space tug. the gravity tractor is a spacecraft which relies on the force of gravity between the target asteroid and a spacecraft hovering in close proximity or orbit near it to gradually modify the asteroids orbit. as it uses only gravity force the mechanical composition and structure of the asteroid is not needed to be known. the space tug is similar to the gravity tractor but the spacecraft is physically connected to the asteroid and therefore prior knowledge of surface characteristics would be necessary. for effective deflection both gravity tractor and space tug require long periods of time. nuclear methods vary for fast and slow rotators as well. nuclear explosions (standoff, surface or subsurface) would be effective against fast rotators. for slow rotators gravity tractor or space tug could be used. however use of a nuclear system in space could be a political problem. use of military technology would be required to save humankind on the earth, although public could react poorly to it, and a possibility of failure during launch resulting in the bomb exploding on the earth cannot be rules out [10]. why is it necessary to have more than one method? 
asteroids differ from each other, having different density, size, shape, rotation, some are binary systems etc. and would be discovered with different warning time for an intervention. for each asteroid a different method could be the key to save the earth [11]. 5. conclusion we presented in this paper the basic parts of the system of principal discovery of potentially dangerous near earth objects, standards for selecting important targets for neo follow-up astrometry [12,13], system of impact probabilities calculations, and also methods of defense against their devastating impacts. there are two basic approaches to protect humankind from the neo hazard. the first one is a deflection. it is the only responsible way to prevent global disaster to happen. a smaller asteroid discovered shortly before its impact is much more likely scenario to happen than a large asteroid we would know of years ahead, therefore deflection implementation is much less likely than the need to evacuate. advance decisions must be made before the need to deflect arises. deflecting an object in space is expensive, especially if an uncertainty in geoinformatics fce ctu 8, 2012 76 tichý, m. et al.: from discovery to impact near earth asteroids its orbit is counted in, resulting in necessity of an additional energy required for successful deflection. the second approach is an evacuation. it is by far the likely procedure for a last moment response. it is less expensive and more familiar: all usual civil defence emergency procedures can be applied. evacuation requires rapid, reliable communication between astronomers and officials, and cooperation with emergency managers [14]. is a collision of asteroid with the earth just a science-fiction? it is not. we know about many older impact craters on the earth’s surface. for example crater ries in bavaria, barringer crater in arizona or tunguska event in 1908. anyway, the asteroid impact already happened in recent times. the earth collided with the asteroid 2008 tc3 on 2008 october 7. this object was only 3 meters in diameter and impacted the nubian desert of northern sudan. it was the first and so far the only time the asteroid was observed while still orbiting the sun, then as a bolide in the earths atmosphere and finally fragments of the body were found after the impact. so the first part of the neo "warning" system was already successfully tested [15]. finally, it is important to say that dealing with the neo hazard is not a task just for the international scientific community, but it is also the matter for international community as a whole, i. e. governments, relevant decision makers as well as the un and other relevant international and inter-governmental bodies. the nature and consequences of the threat posed by near earth asteroid and comet impacts are global and long-term, therefore mitigation efforts will also require coordinated international actions. acknowledgement: the work of klet observatory and the klenot project is funded by the south bohemian regional authority. references [1] minor planet center of the iau at http://www.minorplanetcenter.net/ [2] a. milani, s.r. chesley, a. sansaturio, g. tommei, and g. valsecchi, “nonlinear impact monitoring: line of variation searches for impactors,” icarus, vol. 173, pp. 362-384, 2005. [3] neocp at http://www.cfa.harvard.edu/iau/neo/toconfirmra.html or http:// www.minorplanetcenter.net/iau/neo/toconfirmra.html [4] s. larson current neo surveys, in: near earth objects, our celestial neighbors: opportunity and risk, proc. 
of iau symp. 236. 2007. p. 323 – 328. [5] j. ticha, m. tichy, m. kocer, and m. honkova, “klenot project 2002-2008,” meteoritics & planetary science, vol. 44, issue 12, pp. 1889-1895, 2009. [6] sentry at http://neo.jpl.nasa.gov/risk/ [7] neodys at http://newton.dm.unipi.it/neodys/ [8] nasa neo office at http://neo.jpl.nasa.gov/ [9] a.b. chamberlin, s.r. chesley, p.w. chodas, j.d. giorgini, m.s. keesey, r.n. wimberly, and d.k. yeomans, “sentry: an automated close approach monitoring system for near-earth objects,” bull. amer. astron. soc., vol. 33, pp. 1116, 2001. geoinformatics fce ctu 8, 2012 77 http://www.minorplanetcenter.net/ http://www.cfa.harvard.edu/iau/neo/toconfirmra.html http://www.minorplanetcenter.net/iau/neo/toconfirmra.html http://www.minorplanetcenter.net/iau/neo/toconfirmra.html http://neo.jpl.nasa.gov/risk/ http://newton.dm.unipi.it/neodys/ http://neo.jpl.nasa.gov/ tichý, m. et al.: from discovery to impact near earth asteroids [10] p. cinzano et al, europe mon. not. r. astron. soc. 328, pp. 689-707 (2001) [11] 2011 iaa planetary defence conference, bucharest at http://www.pdc2011.org/ [12] j. ticha, m. tichy and m. kocer, the recovery as an important part of neo astrometric follow-up, icarus,vol.159, no. 2, october 2002, pp. 351-357. [13] d. koschny, m. busch, g. drolshagen, “asteroid observations at the optical ground station in 2010 lessons learnt”, 2011 iaa planetary defense conference, bucharest, 2011 [14] ase : asteroid threats: a call for global response, report to at-14 at un-copuos 2008 (www.space-explorers.org/committees/neo/docs/atacgr.pdf) [15] statement of the iau presented to copuos at the 49th session of the scientific and technical subcommittee, vienna, austria, 6-17 february 2012. by karel a. van der hucht, iau representative to copuos, sron-utrecht, the netherlands geoinformatics fce ctu 8, 2012 78 http://www.pdc2011.org/ www.space-explorers.org/committees/neo/docs/atacgr.pdf ________________________________________________________________________________ geoinformatics ctu fce 2011 25 the concept of “sala de fabrica”1: on-site museums to raise awareness of cultural heritage after a restoration project ana almagro vidal1, teresa blanco torres1, gabriel morate martín1 1programa de conservación del patrimonio histórico español, fundación caja madrid plaza de san martín, 1 madrid (spain) aalmagro@cajamadrid.es, tblancot@cajamadrid.es, gmoratem@cajamadrid.es keywords: restoration project, documentation, communication, museum, visualization of cultural heritage abstract: a conservation process usually generates new knowledge and an enormous amount of documentation during the inception and implementation of the project: the information collected from archives and other institutions; the information provided by the preliminary studies carried out prior to the intervention; the data provided in the field during the works and at the end of the process; and the final set of documentation delivered to the institution responsible for the maintenance and management of the monument. the challenge for conservation professionals and cultural heritage managers throughout this process once the works are over is to achieve and transmit this information to the public and specialists in order to raise awareness for better conservation of our built heritage. 
during the last few years, one of the actions that the caja madrid foundation has activated with its restoration projects has been the opening of permanent on site museums or “salas de fábrica”, a place on site to understand the restoration works, to exhibit the remains that have being retrieved during the project and to permit the public to better understand the historical and artistic values of architectural and archaeological herita ge as well as the importance of preserving our cultural legacy for the future. 1. communication and dissemination as part of a restoration project traditionally, intellectual, technical and financial efforts in heritage conservation have always, except for a few exceptions, focused on the restoration of the monument. in these projects communication and dissemination actions were considered as external activities apart from the restoration project and/or in any case, after the restoration process. this situation has not taken advantage of the enormous capacity of knowledge, public awareness, and enjoyment that a restoration project can offer. the development of a restoration project is a knowledge process and continuous learning that in most cases implies a critical review and deeper understanding of what had been known up to the present. the transmission of this knowledge to society, especially that community of which the monument forms a part, is of great importance for the appreciation and conservation of the monument. plus, the restoration is a perfect time to understand patrimony‟s documentary value and the fact that it can be constantly re-interpreted, something that will not end with the intervention. cultural dissemination of a restoration project should clearly show the scientific, technical and financial complexity of restoration projects in general. this is key with regards to a more participative and critical consciousness about the actions undertaken with respect to our historical heritage. it is understood that raising society‟s awareness about the problems affecting the preservation of heritage and about the capacity to solve them promotes attitudes of greater responsibility. the comprehensive preservation of a monument implies taking into account not only its material aspect but also its future management and use for tourism. it is clear, therefore, that from the beginning the restoration 1 it is important to mention the authors‟ difficulties when trying to translate the concept of “sala de fábrica” into english. it was finally decided to leave it in spanish so as to keep the concept intact. even if the first experiences creating “salas de fábrica” are mentioned throughout the paper as on-site museums it is important to highlight that they are, in fact, not museums. the official and widely understood definition of a museum is an institution with logistic and management support needed for maintainance; while a “sala de fábrica”, once it is opened to the public, does not need anything but the monument itself. it can be explained without any external help and is completely self-sustainable. ________________________________________________________________________________ geoinformatics ctu fce 2011 26 project should involve society and managers, with a sense of their responsibility after the works in terms of guaranteeing the feasibility and sustainability of public access to the monument, not only during the works but also once these are over. 
to this end, it is fundamental in restoration projects promoted by the caja madrid foundation, to consider communication and dissemination plans as one important chapter of the intervention, being always defined and developed according to the magnitude of the intervention and the characteristics of the monument. these plans entail a series of actions carried out during the restoration work and once the project concludes, aimed to strengthen the monument‟s cultural value and significance. along with dissemination activities such as on-site communication, didactic workshops for young children and monthly video diffusion of the restoration progress on the internet, one of the most significant actions of these plans is being directed toward the installation of on-site museums, as a specific space inside the monument to promote the public familiarity with heritage sites and raise awareness about the importance of preserving them after these projects are finished. 2. first experiences with the on-site museums since the creation of the spanish historic heritage conservation department at the caja madrid foundation in 1996, the first actions that were promoted were to simply inform the public through the installation of graphic panels on site about the monument values and its significance as well as the ongoing restoration works, in order to raise awareness about the problems and efforts that heritage conservation requires [1]. when circumstances were appropriate, the actions turned to prepare a small space in the restored monument so as to permanently exhibit the most relevant documentation produced during the intervention, mainly the results of the historical and documentary research as well as the works or archaeological excavations carried out in the framework of the project, aimed to implement the exhaustive knowledge of the building. this is the case of the project at santa cueva in cádiz where, during the restoration of the crypt, liturgical items of high artistic value were retrieved. it was decided to restore and exhibit them in a space available above the crypt and explain the complexity of the space, something that had never been done before. the next case was at the convent of san agustín in talavera de la reina, where a small space was dedicated to life and work of fray lorenzo de san nicolás, the convent architect, and distinguished essayist of the 17th century. finally, at the church of santa catalina in valencia, tombstones and a sculpture of a bishop‟s head were discovered inserted in the fabric of the façade, in a medieval arcosolio (a burial niche). these were then displayed on site with explanations concerning their discovery and historic context (figure 1). figure 1. first experiences in santa cueva (cádiz), san agustín (talavera de la reina) and santa catalina (valencia) it was decided during these restoration projects to take advantage of three fortuitous circumstances, not predictable a priori: lack of space or elements dedicated to explaining the significance and importance of the monument. abandoned pieces of great artistic value, some already existing and some retrieved during the restoration project. ________________________________________________________________________________ geoinformatics ctu fce 2011 27 existence of an available space where, with the property permission, it was possible to exhibit and explain the objects, the monument and the importance of preserving them. 
the success of these first experiences demonstrates the usefulness of permanent on-site dissemination. this permits visitors to have a better understanding of the monument and the conservation process. 3. the cultural restoration projects since these experiences, increasing importance has been given to the social and economic dimension of projects and their dissemination. these projects, promoted by the caja madrid foundation, are now defined as “cultural restoration projects”. they seek a qualitative and quantitative advancement in heritage conservation and aim to: strengthen the values people hold for the monument, financial and visitor management, and interpretation and communication. enrich knowledge about the monument with the addition of economic feasibility studies, territory and tourism analysis, museum studies, etc. involve from the beginning of the project multidisciplinary teams that include geographers, sociologists, museum specialists, economists, etc. use more sophisticated dissemination and technical resources for communication balance the high initial cost of these projects with the economic, cultural, and social benefits over the long term. as mentioned previously one of the actions launched in these cultural restoration projects, apart from the physical restoration of the monument, is to promote the opening of on-site museums. these spaces emphasize additional values of the monument, the geographical and historical context and enable the public to better understand the significance of the building and the importance of preserving these values as part of the legacy of the site. furthermore, this action aims at the improvement of the monument‟s cultural and visitor management. from the first experiences beginning with onsite displays and documentation of the restoration works to the more elaborated on-site museum the foundation aims to facilitate and improve the access for the public to cultural heritage, both intellectual and physically. at the same time, this methodology contributes to a model that serves as an example for others, implementing and promoting a new approach to cultural and tourism management. the caja madrid foundation has developed the concept of on-site museums through many projects. each project has been observed and improved. this has strengthened public outreach and is being supported with the implementation of technical means that permit more sophisticated applications to improve the perception and the learning process of the public. 4.the concept of “sala de fábrica”. the case of san millán de yuso following this evolution, in 2005 the caja madrid foundation, the regional government of la rioja and the religious community of the augustinian recollects signed an agreement to promote the cultural restoration project of the church of the 16th century monastery of san millán de yuso (la rioja) that was included on the world heritage list in 1997 as birthplace of the spanish language. as in every project of the caja madrid foundation, an ambitious communication and dissemination plan was launched as part of the restoration project, aimed to introduce visitors and residents of the region to the concept of cultural heritage during the works and the importance of preserving the values and significance of this world heritage site [2]. during the work earlier romanesque traces –building elements and foundations– were discovered. 
this extraordinary discovery confirmed the existence of the medieval church that until unearthed the only evidence was a few references in historical documents. when brought to light this discovery altered the project, but at the same time, gave an extraordinary opportunity to the multidisciplinary team to learn and study the origins and causes of the inception and development of this monastery since the 13th century, and share their findings with the public (figure 2). in addition, the historical and artistic research carried out during the project revealed the lack of context of a large amount of movable items belonging to the church, as well as the obsolescence of the explanations given by the guides that lead the public visits. also, the integral restoration of the church, especially concerning the baroque chapels decorated with frescoes, paintings and retablos strengthen the richness of a cultural heritage that had never been opened to public before. once the works were over, all these historic documents, studies, plans, photographs, surveys and remains from the excavations and results from the restorations works were carefully analyzed, selected and edited, in order to tell the history of the monastery of san millán de yuso, its origins and evolution throughout the centuries and how this process has defined the site that the visitor observes today. for the caja madrid foundation it was crucial to display and explain all this information on site, where the real context of the information is all around and the documentation can be easily identified and related to the site. with these premises a ________________________________________________________________________________ geoinformatics ctu fce 2011 28 “sala de fábrica” was installed in the monastery of san millán using some of the chapels inside the church. in the framework of the cultural restoration project this permanent on-site museum aimed to: permit the access of the public to the ensemble of chapels next to the presbytery, with their function redefined after the restoration. the most important one, the relics chapel, has recovered the collection of reliquary heads that were previously exhibited in a different space of the monastery, completely undervalued and out of context2 (figure 3). improve the presentation of the rest of the church making the cultural and religious uses compatible. in this regard, new signs were placed in front of all the retablos, updating their content to the new information retrieved during the investigation. figure 2. discovery of the romanesque church foundations in yuso during the archaeological excavation figure 3. the reliquary heads collection exhibited out of context and back to their original location open some of the restored chapels with no use into useful spaces that explain the historical and constructive evolution of the church and the monastery. all the archaeological and historical information that was 2 these reliquary heads as well as other movable objects had been placed out of context in spaces next to the church. once these pieces were replaced in their original locations the space had to be restored. this is were the explanation of the medieval church was placed and the “sala de fábrica“ concept was adopted for san millán. 
________________________________________________________________________________ geoinformatics ctu fce 2011 29 collected by the professionals and experts during the 6-year project presented new discoveries and points of view about the constructive, artistic, religious and cultural development of the site (figure 4). figure 4. detail of the “sala de fábrica” in san millán de yuso, emphasize the importance of the archaeological discovery and give the opportunity to exhibit the remains found during the archaeological excavations that help to better understand the history of the monastery (figure 5). during implementation, the information that had to be displayed was analyzed, adapted and edited to permit dissemination at different levels (scientific, general public) with exhibition resources adapted to the features of the monument. for this purpose hybrid products were the most useful for communication. photogrammetric representations, 3d models, virtual visualization of the building geometry and multimedia applications permitted interactivity with the user. these products have demonstrated their usefulness to achieve this purpose, together with the monument itself, as well as the use of simple panels to display general information. figure 5. exhibition of the archaeological remains in the “sala de fábrica” ________________________________________________________________________________ geoinformatics ctu fce 2011 30 apart from these products specially adapted for general public, all the technical documentation regarding the project is also displayed in special applications available on site. these applications permit more specific searches and consultations by professionals and experts or even public that request more detailed information about the work. from the preliminary studies developed during the investigation phase and the final technical reports of the restoration project to video productions and many kinds of graphic material –surveys, cartographies, analysis, 3d models –produced during the investigation and during the intervention that brought together a wide spectrum of specialists (figure 6). figure 6. photogrammetric survey of the fray benito de salazar‟s stone retablo and its pathologies map contemporary to the implementation of the “sala de fábrica” and in the frame of the cultural restoration project at large, an issue that has become crucial in this kind of project is the management model. this entails bringing together different institutions to work jointly, with interdisciplinary coordination between different professionals and integrating these new spaces into the previous visitor patterns of the monument. last but not least, is the estimation of the cost for the institution responsible for the management of such an initiative in terms of employees, technical assistance, maintenance of hardware and software and general services to keep open to public this small but permanent infrastructure. in san milan, the opening of this small on-site museum has obliged local managers and the religious community of the augustinian recollects to redefine certain aspects of their usual management of the monastery, but the effort has already given very good results. 5. 
a new “sala de fábrica” coming soon for the façade of the cathedral of pamplona based on the positive experience at the monastery of san millán de yuso, the caja madrid foundation is now fully immersed in the preparation, coordination and management of a new “sala de fábrica” in the cathedral of pamplona (navarra) that will open in september 2011. the project carried out over the last 4 years by the archbishopric of pamplona, the regional government of navarra, the town hall of pamplona, and the caja madrid foundation, entailed ________________________________________________________________________________ geoinformatics ctu fce 2011 31 the restoration of the 19th century neoclassical façade [3] as well as the ensemble of eleven bells in the towers (figure 7). figure 7. the façade of the cathedral of pamplona in addition to these two main actions, the incorporation of a “sala de fábrica” was considered essential from the beginning. the intention was to recover the “bell ringer house” as a permanent on-site museum containing a documentation and dissemination programme similar to the one developed in san millán de yuso but enhancing certain aspects such as the importance of intangible heritage [4, 5]. this has provided interesting and challenging issues to document and transmit to the public, such as the exciting and unknown world of the bells and their ringers as well as their functional and cultural significance. also, it was considered as a key issue to promote the maintenance and enhancement of this alive and sonorous cultural heritage that still surrounds everyday life in villages, towns and, less commonly, big cities [6]. another important intangible aspect that was under study to be included in the on-site museum was the explanation and display of public discourse about the façade and its renovation in the 19th century. this renovation divided the community between those defending the renowned architect ventura rodríguez (fine arts royal academy of san fernando in madrid) and those who considered the façade as an assault attached to the existing gothic building3. this division has continued until the present day. it is a goal of the “sala de fábrica” to capture and document this intangible aspect and to transmit the importance and relevance of this masterpiece of the neoclassical period in spain. in addition to the intangible aspects, another important physical issue in this project has been the restoration of ventura rodríguez‟s original plan traced back to the 1řth century as well as other drawings related to the façade. all these paper and parchment materials that were abandoned and forgotten in the cathedral archive have been brought to light, restored and now ready to be exhibited appropriately in this on-site museum. not only have the drawings been conserved but also the construction process has been given special attention in the “sala de fábrica”: the explanation and exhibition of materials used, artifacts and engineering techniques, and operations carried out during the 19th century work. a 3d computer model was created that explains the 18 year façade construction process. the architect‟s original worksite books were analyzed and used as the basis for this model. finally, like in the case of san millán de yuso, all the information from the preliminary studies, project documents, photographic materials and final reports were included in the “sala de fábrica”. 
objects found during the works as well as pieces that had to be replaced 3 the famous author victor hugo wrote about the façade, calling the towers “donkey ears” [7]. ________________________________________________________________________________ geoinformatics ctu fce 2011 32 because of their poor condition will also be exhibited. all of this will be explained through text, images, video and audio through multimedia applications permitting the users to deepen their knowledge figure ř. axonometric model with the “sala de fábrica” distributed along the “bell ringer house” with this “sala de fábrica” the cathedral now has almost 300m2 of exhibit space dedicated to the façade and its tangible and intangible values, which have never been emphasized before. the archbishopric was provided with a simple management plan to operate the “sala de fábrica”, permit visits to both the façade and cathedral and employ the bells for liturgical use (figure 8). 6. conclusions the creation of a “sala de fábrica” entails an important contribution to the management of tourism in cultural heritage sites but depends on individual circumstances, project limitations and budget constraints. the implementation of these “salas de fábrica” is an action not always foreseen in restoration projects as not all projects are meant to have such a space. but in any case, these spaces, in conclusion, help to: enrich the cultural visit to the monument with the incorporation of new spaces that emphasize tangible and intangible values of the monument4. in these terms the public visit increases social and economic benefits to the monument and, consequently, to the city. permits an updated interpretation of the monument on-site based on the multidisciplinary studies and projects carried out and the remains discovered. offers a space to consult all the documentation regarding the cultural restoration project. 4 this premise is included in the spanish constitution, article 44.1 [8]:„los poderes públicos promoverán y tutelarán el acceso a la cultura, a la que todos tienen derecho‟: the right of every person to access culture, not only physically but also intellectually. in these terms, cultural heritage will always require intermediation and explanation. ________________________________________________________________________________ geoinformatics ctu fce 2011 33 6. references [1] tomaszewski, a., simone, g. (eds.): the image of heritage. changing perception, permanent responsibilities. edizioni polistampa, firenze 2011. [2] morate, g., almagro, a., blanco, t., andonegui, m. (2008): the monastery of san millán de yuso (spain): transmitting the importance of preserving the significance of a world heritage site. proceedings of the 16th icomos general assembly and international symposium: „finding the spirit of place – between the tangible and the intangible‟, sept–oct 2008, quebec, canada. http://www.international.icomos.org/quebec2008/cd/toindex/77_pdf/77-1ffb-282.pdf 2011-08-15. [3] lorda, j.: fachada de la catedral de pamplona: sus temas compositivos, cuadernos de la cátedra de patrimonio y arte navarro (2006), núm.1, pp. ř3-107. [4] blake, j.: unesco´s 2003 convention on intangible cultural heritage. the implications of community involvement in safeguarding. intangible heritage key issues in cultural heritage, (2009). pp 45-50. [5] patrimonio cultural inmaterial el texto de la convención para la salvaguardia unesco (2006) http://www.unesco.org/culture/ich/index.php?lg=es&pg=00006, 2011-08-15. 
[6] llop i bayo, f.: campanas, campanarios y toques: la recuperación de un sonido perdido, pp. 44-51. iccrom, 2003.
[7] jusué simonena, c.: visión de viajeros sobre la catedral de pamplona, p. 483. http://dspace.unav.es/dspace/bitstream/10171/4136/1/visi%c3%b3n%20de%20viajeros%20sobre%20la%20catedral%20de%20pamplona%20libro%20catedral%201-16.pdf, 2011-08-15.
[8] constitución española 1978: http://noticias.juridicas.com/base_datos/admin/constitucion.t1.html, 2011-08-15.
original research at the archives of pamplona and the archives of the monastery of yuso by the project teams. unpublished.

extension of mathematical background for nearest neighbour analysis in three-dimensional space

eva stopková
phd. student of geodesy and cartography, department of theoretical geodesy, faculty of civil engineering, slovak university of technology in bratislava

abstract

this paper deals with the development and testing of a module for grass gis [1] based on nearest neighbour analysis. the method can be useful for assessing whether points located in an area of interest are distributed randomly, in clusters or separately. its main principle consists of comparing the observed average distance between nearest neighbours ra with the average distance between nearest neighbours re that is expected in case of randomly distributed points; the result should be statistically tested. the methods for two- and three-dimensional space differ in the way re is computed. the paper also extends the mathematical background by deriving the standard deviation of re, which is needed in the statistical test of the analysis result. as the disposition of natural phenomena (e.g. the distribution of birds' nests or plant species) and the test results suggest, an anisotropic function would represent relationships between points in three-dimensional space better than the isotropic function used in this work.

keywords: 3d gis, spatial analysis, nearest neighbour analysis

1. introduction

the purpose of this work is to outline how to implement nearest neighbour analysis (nna) in three-dimensional space within the geographical information systems (gis) environment. first, the article summarizes the derivation of the mathematical background for an isotropic phenomenon. next, it describes the module for the open source software grass gis [1] that was developed on the basis of these relationships. finally, the results of the module's tests are analysed. nna helps to assess whether points located in a tested area are distributed randomly, in clusters or separately. it can be useful in biology to monitor the behaviour of plant or animal populations [2], in stellar statistics, in chemistry to analyse atomic structures, etc. in gis, nna may be helpful in answering questions in biology (mentioned above) or in solving social problems (e.g. crime analysis). in the case of analysing vertically distributed phenomena (such as artifacts in an archaeological trench, birds' nests, or the behaviour of plant or animal populations in 3d space), the three-dimensional distance should be considered in nna.
this distance depends also on the difference in elevation between the objects. if we assume the phenomenon to be isotropic, it is possible to express nna as a function of the distance r. it is more probable, however, that natural phenomena are anisotropic – they behave differently in the horizontal and the vertical direction. nna could then be a function of the horizontal distance and the vertical difference in elevation (or of the zenith angle). in another case, the phenomenon may be anisotropic in all three dimensions.

2. keynote

nna in three-dimensional space was outlined in the proceedings [3] and [4]. the main principle of this work is also based on [2], which describes nna for two-dimensional space. the goal of this article, on the theoretical level, is to complete the idea of statistical testing in three-dimensional space. there are n points located in the area of interest. their average isotropic distance between nearest neighbours ra is expressed using the arithmetic mean. if these points are randomly distributed, the average isotropic distance between nearest neighbours in three-dimensional space is expected to be (according to [4], formulas (674), (675)):

$$ r_E = \int_0^\infty 4\pi n \, r^3 \, e^{-\frac{4\pi n}{3} r^3} \, \mathrm{d}r = \frac{1}{\sqrt[3]{\frac{4\pi n}{3}}} \int_0^\infty e^{-x} \sqrt[3]{x} \, \mathrm{d}x = \frac{1}{\sqrt[3]{\frac{4\pi n}{3}}} \cdot \Gamma\!\left(\frac{4}{3}\right) $$

where n is the density of points per volume unit and Γ(4/3) is the gamma function¹. the formula is based on the probability that the nearest neighbour of a point is located on the boundary of its spherical surrounding with radius r. a more detailed explanation of the topic can be found in [4] or in [3].

¹ the gamma function Γ(n) is the extension of the factorial to complex and real numbers [5].

the measure of the degree to which the observed points are randomly distributed, r, is expressed as the ratio of the observed and the expected distance [2]. it may acquire these values:

– r = 1 → the points are distributed randomly, because ra = re,
– r = 0 → the points are identical, because ra = 0,
– r = max → it is necessary to derive the maximal value of the ratio r.

according to [2], if the points are located on a hexagonal pattern, r in two-dimensional space reaches the maximal value

$$ r_{2D\,max} = 2.1491\,\sqrt{k} $$

where k is the number of segments in a circle of infinite radius centred at the observed point. the results of the module testing (chap. 6), as well as the results obtained using the analytical tool average nearest neighbor [6], do not correspond with this value. determining the maximum of r will be the purpose of further work (chap. 7).
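the observed and expected distances, and hence the ratio r, can be obtained from a point cloud in a few lines. the following sketch is an independent illustration (it is not the code of the grass module): it uses scipy's k-d tree for the nearest-neighbour search and, for simplicity, takes the volume of the generating box instead of the minimum bounding box discussed in chapter 4.

```python
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree


def nn_ratio_3d(points: np.ndarray, volume: float):
    """return (r_a, r_e, R) for a set of 3d points inside a region of the given volume."""
    tree = cKDTree(points)
    # k=2 because the closest neighbour of every point is the point itself (distance 0)
    dist, _ = tree.query(points, k=2)
    r_a = dist[:, 1].mean()                                  # observed mean nearest-neighbour distance
    n = len(points) / volume                                 # density of points per volume unit
    r_e = gamma(4.0 / 3.0) / (4.0 * pi * n / 3.0) ** (1.0 / 3.0)
    return r_a, r_e, r_a / r_e


rng = np.random.default_rng(seed=1)
pts = rng.uniform(0.0, 20.0, size=(2000, 3))                 # 2000 random points in a 20 x 20 x 20 cube
r_a, r_e, ratio = nn_ratio_3d(pts, volume=20.0 ** 3)
print(f"ra = {r_a:.3f}, re = {r_e:.3f}, r = {ratio:.3f}")    # r should be close to 1 for a random pattern
```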
[9]) of the cumulative standard normal distribution are base for determination of critical values w : w = (−∞,uα/2) ∪ (u(1−α/2),∞) if null hypothesis about randomness of distribution in point dataset is rejected with positive values of the test statistic c, the points are assumed to be separated (patterned). if the values of the test statistic c are negative and the null hypothesis is rejected, the points are assumed to be clustered. 4. implementation of 3d nearest neighbour analysis the theoretical background described in the previous chapters was implemented in the module v.nn_spatial_stat developed in the environment of open-source software grass gis [1]. the module contains functionality of nna in two-dimensional space (implemented also in the analytical tool [6] of arcgis [10]) and solution for objects in three-dimensional space. the most significant complications during development were connected with determining of minimum bounding box (mbb)2. in two-dimensional space, the density of points depends on the area of surface where the points are located. according to [11], the area could be set up by user or it is possible to use area of minimum bounding rectangle (mbr). analogically, the density of points in three-dimensional space could be determined using volume of box set up by user or volume of mbb. methods how area (volume) of mbr (mbb) can be determined are principally quite similar: – coordinates of convex hull3 that covers the point set must be obtain. in 3d case it is necessary to know also reference of each vertex to the faces. partially modified functions of the module v.hull [13] that enables to build convex hull in new vector layer were used. output of modified functions is represented by the list of coordinates of vertices (or faces), not by new vector body. – the coordinates of vertices must be transformed to coordinate systems – which x axes are parallel to lines between neighbouring vertices in 2d case( xt yt ) = rx(σ) · ( x y ) where σ is bearing of the line, – which xy planes are parallel to planes of faces of the convex hull in 3d case  xtyt zt   = rxy(σx,σy) ·   xy z   2minimum bounding box encloses all features in vector dataset. its faces are paralel to planes of 3d coordinate system in special case only. 3convex hull is a cycle graph (body) that encloses all features in dataset. its vertices are located on the features in such way that all interior angles are less than 180◦ (according to [12]). geoinformatics fce ctu 11, 2013 27 stopková, e.: extension of mathematical background for nna in 3d space where the angles σx, σy may be expressed: tan(σx) = z1 −z0 y1 −y0 tan(σy) = z2 −z0 x2 −x0 and x, y, z are coordinates of vertices belonging to triangular face of convex hull. transformation matrix is based on rotation of x and y axes: rxy(σx,σy) = rx(σx) ·ry(σy) = =   cos(σy) 0 sin(σy)sin(σx) ·sin(σy) cos(σx) sin(σx) · cos(σy) cos(σx) ·sin(σy) −sin(σx) cos(σx) · cos(σy)   – extent of transformed coordinates should be determined, – area / volume of the extent must be counted and the values should be compared to obtain the smallest one. this value becomes input in determining the density of points. 5. testing of the module v.nn_spatial_stat for points located in two-dimensional space, the module was tested comparing numerical results with outputs of the analytical tool average nearest neighbor [6] that is part of the spatial statistics toolbox in the software arcgis 10.1 [10]. 3d variant of nna is not implemented in any of accessible softwares. 
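before turning to how the module was verified, the statistic of chapters 2 and 3 can be sketched in a few lines of python. the sketch below is an illustration only (it assumes numpy and scipy are available and is not the grass module itself); the scaling of the test statistic simply mirrors the values reported for the 3d tests in tables 4a and 4b.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def nna_3d(points, volume):
    """nearest neighbour analysis statistic for points in 3d space.

    points : (N, 3) array of x, y, z coordinates
    volume : volume of the region containing the points (e.g. the mbb volume)
    returns (r_a, r_e, R, c)
    """
    pts = np.asarray(points, dtype=float)
    N = len(pts)
    n = N / volume                               # density of points per volume unit

    # observed mean nearest-neighbour distance r_a
    # (k=2 because the closest hit of every query point is the point itself)
    dist, _ = cKDTree(pts).query(pts, k=2)
    r_a = dist[:, 1].mean()

    # expected mean nearest-neighbour distance for randomly distributed points
    r_e = gamma(4.0 / 3.0) / (4.0 * np.pi * n / 3.0) ** (1.0 / 3.0)

    R = r_a / r_e                                # 1 random, < 1 clustered, > 1 separated

    # standard deviation of r_e from appendix a: var(r) = 0.040536 / n^(2/3);
    # normalisation chosen so that the values of tables 4a and 4b are reproduced
    sigma_re = np.sqrt(0.040536 / n ** (2.0 / 3.0))
    c = (r_a - r_e) / sigma_re
    return r_a, r_e, R, c
```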
that was the reason to verify numerical results by comparing them with values computed by scripts in the software mathematica [14] and matlab [15]. 5.1. testing in two-dimensional space the module was tested on various sets of synthetic data. configuration of observed points in the area of interest 20 km x 20 km was designed to represent all possible cases: randomly distributed points, separated points and clustered points. the samples of randomly distributed points were generated in the software grass gis [1] using the module v.random [16]. except numerical accuracy, process speed was tested too. the results are summarized in tables 1a and 1b. n = 2000 v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 225.858627 225.858627 0.000 mm re[m] 223.227944 223.227944 0.000 mm r 1.011785 1.011785 0.000000 c 1.008239 1.008245 0.000006 a[m2] 398645718.650660 398645718.650315 -345 mm2 t[s] 0.107 3 table 1a: the results of testing of the module v.nn_spatial_stat in two-dimensional space using 2000 randomly distributed points geoinformatics fce ctu 11, 2013 28 stopková, e.: extension of mathematical background for nna in 3d space n = 5000 v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 140.981861 140.981861 0.000 mm re[m] 141.307761 141.307761 0.000 mm r 0.997694 0.997694 0.000000 c -0.311984 -0.311986 -0.000002 a[m2] 399357668.904297 399357668.904439 142 mm2 t[s] 0.324 5 table 1b: the results of testing of the module v.nn_spatial_stat in two-dimensional space using 5000 randomly distributed points according to the test statistic c ∈ (−1.96; 1.96), it can be assumed that in both cases the points were randomly distributed. differences in the values of mbr area may be caused by the different way of storing data in computer memory, as the results of next experiments show. significantly shorter processing time (compared with the analytical tool average nearest neighbor [6]) may be reached, because the module does not generate report with graphical outputs. the condition of maximized separation in two-dimensional space is accomplished by points arranged in the pattern of equilateral triangles, i.e. the nearest neighbouring points are located around the observed point in the shape of regular hexagon (proofed in [2]). two datasets were generated, seven points arranged in the hexagon with centre and many points arranged in the hexagonal pattern. coordinates of the points were computed in the matlab [15] environment. tables 2a and 2b summarize the results of the tests. 
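the hexagonal datasets summarized in tables 2a and 2b below were generated in matlab; purely as an illustration, an equivalent construction of such a maximally separated pattern could look as follows (hypothetical python, not the original script):

```python
import numpy as np

def hexagonal_pattern(spacing, nx, ny):
    """points on a triangular lattice: every interior point's nearest
    neighbours lie at distance `spacing`, forming a regular hexagon around it."""
    pts = []
    for j in range(ny):
        y = j * spacing * np.sqrt(3.0) / 2.0   # row height of an equilateral-triangle grid
        x_off = (j % 2) * spacing / 2.0        # shift every second row by half the spacing
        for i in range(nx):
            pts.append((x_off + i * spacing, y))
    return np.array(pts)

# e.g. hexagonal_pattern(250.0, 80, 60) yields a block of points broadly
# comparable in size and spacing to the dataset used in table 2b
```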
r = 250 m n = 7 v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 250.000000 250.000000 0.000 mm re[m] 87.933894 87.933894 0.000 mm r 2.843045 2.843045 0.000000 c 9.328528 9.328585 0.000057 a[m2] 216506.350945 216506.350945 0.000 mm2 table 2a: the results of testing of the module v.nn_spatial_stat in two-dimensional space using maximally separated points (hexagon with centre) r = 250 m n = 4732 v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 250.000000 250.000000 0.000 mm re[m] 140.773069 140.773069 0.000 mm r 1.775908 1.775908 0.000000 c 102.108229 102.108855 0.000626 a[m2] 375097253.014138 375097253.014137 -0.954 mm2 table 2b: the results of testing of the module v.nn_spatial_stat in two-dimensional space using maximally separated points (hexagonal pattern) geoinformatics fce ctu 11, 2013 29 stopková, e.: extension of mathematical background for nna in 3d space because of the test statistic c ∈< 2.58;∞), null hypothesis about random distribution of the points is rejected on the confidence level α = 0.01. the points are separated. clusters of points were created locating points around each of sample of n0 randomly generated points. coordinates of new points were computed in matlab [15] using bearings with step 36◦ and random distances (table 3a, table 3b). n0 = 100 σ = [0◦; 36◦; . . . 324◦] r = random(′normal′, 30, 10) v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 20.121996 20.121996 0.000 mm re[m] 311.387518 311.387518 0.000 mm r 0.06462 0.06462 0.000000 c -56.586926 -56.587273 -0.000347 a[m2] 387848745.955230 387848745.955141 -89 mm2 table 3a: the results of testing of the module v.nn_spatial_stat in two-dimensional space using clusters of points with shorter distances from local centres n0 = 100 σ = [0◦; 36◦; . . . 324◦] r = random(′normal′, 300, 1000) v.nn_spatial_stat average nearest neighbor [6] difference ra[m] 301.259436 301.259436 0.000 mm re[m] 351.460991 351.460991 0.000 mm r 0.857163 0.857163 0.000000 c -8.641085 -8.641138 -0.000053 a[m2] 494099313.011801 494099313.011630 -171 mm2 table 3b: the results of testing of the module v.nn_spatial_stat in two-dimensional space using clusters of points with longer distances from local centres null hypothesis about randomly distributed points may be rejected on confidence interval α = 0.01 because c ∈ (∞;−2.58 >. negative values of the test statistic c indicate that the samples are clustered. the sample in table 3a in which clusters were generated using random distances with normal distribution n(30, 10) is characterized by significantly lower value of test statistic c as the sample in table 3b in which distances with normal distribution n(300, 1000) were used. 5.2. testing in three-dimensional space nowadays, there is no accessible tool for process nna in three-dimensional space, so it is not possible to compare the results of the module with outputs of any verified software. that is the reason to control the results using parts of code scripted in mathematical software mathematica [14] or matlab [15]. this method helped to verify numerical accuracy and to repair few bugs. the most complicated task was verification of volume of minimum bounding box (mbb). this value is based on coordinates of vertices belonging to faces of convex hull. coordinates of geoinformatics fce ctu 11, 2013 30 stopková, e.: extension of mathematical background for nna in 3d space convex hull are transformed to coordinate systems with plane of x and y axes parallel to plane of each face. 
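a simplified sketch of this face-rotation search is given below, using scipy’s qhull-based convex hull in place of the module’s modified v.hull functions, so it should be read as an illustration of the approach rather than the module’s actual code:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mbb_volume(points):
    """approximate minimum bounding box volume: for every triangular face of
    the convex hull, rotate the point set so that the face is parallel to the
    x-y plane and keep the smallest axis-aligned extent volume found."""
    pts = np.asarray(points, dtype=float)
    hull = ConvexHull(pts)
    best = np.inf
    for tri in hull.simplices:                    # indices of one triangular face
        p0, p1, p2 = pts[tri]
        z_axis = np.cross(p1 - p0, p2 - p0)       # face normal
        z_axis /= np.linalg.norm(z_axis)
        x_axis = p1 - p0                          # any edge of the face lies in the new x-y plane
        x_axis -= np.dot(x_axis, z_axis) * z_axis
        x_axis /= np.linalg.norm(x_axis)
        y_axis = np.cross(z_axis, x_axis)
        rot = np.vstack((x_axis, y_axis, z_axis)) # rotation into the face-parallel frame
        t = pts @ rot.T                           # transformed coordinates
        extent = t.max(axis=0) - t.min(axis=0)
        best = min(best, extent.prod())
    return best
```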
output is volume of the smallest box extending the transformed coordinates. functions of module v.hull [13] have been used to obtain vertices belonging to faces of convex hull. these functions were modified to output not new vector layer with convex hull but matrices containing coordinates of vertices and faces. numerical accuracy of transformations and determining of volume of mbb were verified comparing results of module to outputs computed by matlab [15]. this method enables also to verify coordinates exported of convex hull that we suppose to be created correctly. other functions of the module were tested while debugging or they were tested as part of 2d nna functionality. for example, the function for computing average distance of the nearest neighbour ra that is identical for 2d and 3d case (only input z coordinates differ, they are zeros for two-dimensional space). numerical accuracy of formulas for determining expected average distance between the nearest neighbours in set of randomly distributed points re, ratio r and test statistics c were tested comparing to results obtained by mathematica [14] while deriving of formulas and debugging. the results of testing randomly distributed points are summarized in tables 4a and 4b. the same datasets as while testing in 2d space were used but z coordinates were considered. n = 2000 v.nn_spatial_stat matlab [15] script difference ra[m] 346.071782 re[m] 323.531486 r 1.06967 c 0.191691 v [m3] 398423031180.489 398423031174.718 -5.771 m3 t[s] 0.093 table 4a: the results of testing of the module v.nn_spatial_stat in three-dimensional space using 2000 randomly distributed points n = 5000 v.nn_spatial_stat matlab [15] script difference ra[m] 251.243012 re[m] 238.607135 r 1.052957 c 0.145707 v [m3] 399562811897.870 399562811870.293 -27.577 m3 t[s] 0.496 table 4b: the results of testing of the module v.nn_spatial_stat in three-dimensional space using 5000 randomly distributed points considering value of test statistics c ∈ (−1.96; 1.96), it is possible to assume that points were distributed randomly in both cases. differences in volume of mbb are inappreciable comparing them to the values. coordinates of point clusters in three-dimensional space were computed in surrounding of randomly generated points using random distances, bearings with step 36◦ and zenith angles geoinformatics fce ctu 11, 2013 31 stopková, e.: extension of mathematical background for nna in 3d space with step 10◦. the results of tests are shown in tables 5a and 5b. n0 = 100 σ = [0◦; 36◦; . . . 324◦] z = [−90◦;−80◦; . . . 90◦] r = random(′normal′, 30, 10) v.nn_spatial_stat matlab [15] script difference ra[m] 7.03274 re[m] 155.180422 r 0.04532 c -2.626741 v [m3] 430279744723.451 430279744656.574 -67 m3 t[s] 6.773 table 5a: the results of testing of the module v.nn_spatial_stat in three-dimensional space using clusters of points with shorter distances from local centres n0 = 100 σ = [0◦; 36◦; . . . 324◦] z = [−90◦;−80◦; . . . 90◦] r = random(′normal′, 300, 1000) v.nn_spatial_stat matlab [15] script difference ra[m] 182.221953 re[m] 349.816030 r 0.520908 c -1.318191 v [m3] 4784510371770.96 4784510371960.23 189 m3 t[s] 6.407 table 5b: the results of testing of the module v.nn_spatial_stat in three-dimensional space using clusters of points with longer distances from local centres 6. outline of future work it will be appropriate to enlarge testing of the module adding sample of points with maximal separation in three-dimensional space. 
analogically to two-dimensional space where neighbouring points are arranged in hexagon around observed point, it will be necessary to find convex body with vertices located on equilateral triangles and plane cutting its vertices and center should also be composed of equilateral triangles. body fulfilling condition that distances between vertices and centre definitely cannot be composed only of regular hexagons (proofed e.g. [17]: vertex of body is intersection of three polygons (faces) and sum of interior angles must be less than 360◦). except that, in case of putting few pieces of these bodies together, there should be no empty spaces between them. analysis of properties of truncated regular plato’s bodies described in timaeus [18] or semi-regular archimedes’ bodies [19] will be purpose of future work. the next item is to develop the mathematical background to better model fact that most of the phenomena may behave differently in horizontal and vertical direction. this analysis will be based on derivation of average distance of the nearest neighbours expected in case of geoinformatics fce ctu 11, 2013 32 stopková, e.: extension of mathematical background for nna in 3d space randomly distributed points re as function of two variables, horizontal distance h and vertical difference of elevation z: re = f(h,z) isotropic model of nna that had been derived in the past and in this article was tested and completed for 3d gis purposes may be useful for the applications mentioned in introduction, but it may be inappropriate for many phenomena on the earth’s surface. this assumption may be supported by e.g. differences found in tests’ results. in test of randomly distributed points, there were both the samples detected as randomly distributed points in twoand also in three-dimensional space (see tables 1a, 1b and tables 4a, 4b). but in nna of clustered points, results indicated randomness of distribution (tables 5a, 5b). this hypothesis may be verified testing the module on various samples including points located on three-dimensional pattern to compare behaviour of maximally separated points in 2d and 3d space. appendix a derivation of σre, the standard deviation of average distance between the nearest neighbours expected in case of randomly distributed points. the formulas have been derived analogically to the method [2] for two-dimensional space. variance of average distance between the nearest neighbours in dataset of randomly distributed points: var(r) = e(r2) − (re)2 (a.1) where e(r2) means dispersion. standard deviation is then: σre = √ var(r) (a.2) derivation of dispersion e(r2) e(r2) = ∞∫ 0 (4πn 3 · 3r2 · e− 4πn 3 ·r 3 ) · r2dr (a.3) it is necessary to express integral (a.3) as gamma function according to [5]: γ(z) = ∞∫ 0 e−x 2 · x2z−1dx (a.4) if z = 2/3, ∞∫ 0 4πn · r4 · e− 4πn 3 ·r 3 dr = 2 · ∞∫ 0 e−x 2 · x1/3dx (a.5) then x2 = 4πn 3 ·r3 geoinformatics fce ctu 11, 2013 33 stopková, e.: extension of mathematical background for nna in 3d space (a.61) r = (√ 3 4πn ·x )2/3 = ( 3x2 4πn )1/3 (a.62) and also (a means substitution constant): 4πn ·r4 = 2a ·x1/3 (a.7) derivation of constant a to simplify derivation, (a.7) may be expressed as 4πn·r4 = 2a·x1/3 ·x1/3 = 2a· ( x2 )1/3 (a.8) where aa.7 6= aa.8. 
a = 4πn ·r4 2 · (x2)1/3 = 2πn · ( 3x2 4πn )4/3 (x2)1/3 = 3 √ 34 25 ·πn ·x2 = 3 √ 34 25 ·πn · 4πn 3 ·r3 = 3 √ 6π2n2 ·r3 (a.9) after substituting (a.5) to (a.9): ∞∫ 0 4πn · r4 · e− 4πn 3·r3 dr = 2 · ∞∫ 0 e−x2 · x1/3dx 3 √ 6π2n2 = γ(2/3) 3 √ 6π2n2 = 1.35411793942640 3 √ 6π2n2 (a.10) derivation of variance var(r) var(r) = e(r2) − (re)2 = γ(2/3) 3 √ 6π2n2 − ( γ(4/3) 3 √ 4πn/3) )2 = = 1 3 √ 2π2 · ( γ(2/3) 3 √ 3 − 3 √ 9 · (γ(4/3))2 2 ) 3 √ n2 = 0.040536 3 √ n2 (a.11) standard deviation of average distance between the nearest neighbour in randomly distributed set of points will be: σre = √ var(r) (a.12) geoinformatics fce ctu 11, 2013 34 stopková, e.: extension of mathematical background for nna in 3d space references [1] grass development team (2013): geographic resources analysis support system (grass) software [computer software]. open source geospatial foundation project. available at: http://grass.osgeo.org. [2] clark, p. j., evans, f. c. (2003): distance to nearest neighbor as a measure of spatial relationships in populations. in ecology [online]. vol. 35, is. 4. october 1954 [cit. 2013-03-21], pp. 445-453. available at: https://courses.washington.edu/bio480/week1paper-clark_and_evans1954.pdf. issn 0012-9658. [3] hertz, p. (1909): über den geigenseitigen durchschnittlichen abstand von punkten, die mit bekannter mittlerer dichte im raume angeordnet sind. in mathematische annalen, 67: 387-398. according to: clark, p. j., evans, f. c. (2003). distance to nearest neighbor as a measure of spatial relationships in populations. in ecology [online]. vol. 35, is. 4. october 1954 [cit. 2013-03-21], pp. 445-453. available at: https://courses.washington.edu/bio480/week1-paper-clark_and_evans1954.pdf. issn 0012-9658. [4] chandrasekhar, s. (1943): the law of distribution of the nearest neighbor in a random distribution of particles. in reviews of modern physics. stochastic problems in physics and astronomy. vol. 15, 1-89, 1943 [cit. 2013-03-21], pp. 86-87. available at: http://rmp.aps.org/abstract/rmp/v15/i1/p1_1. issn 1539-0756. [5] weisstein, e. w. (2013): gamma function. in mathworld. a wolfram web resource [online]. 2013 [cit. 2013-03-21]. available at: http://mathworld.wolfram.com/gammafunction.html. [6] esri (2013): average nearest neighbor [computer software]. in arcgis desktop: release 10.1, spatial statistics toolbox [cit. 2013-05-14]. redlands, ca: environmental systems research institute. [7] hughes, i. g., hase, t. p. a. (2010): measurements and their uncertainities : a practical guide to modern analysis. 1. edition. new york : oxford university press inc., new york, 2010. 136 p. isbn 978-0-19-956633-4. [8] esri (2012): what is a z-score? what is a p-value? in arcgis help 10.1 [online]. 2012 [cit. 2013-03-21]. available at: http://resources.arcgis.com/en/help/main/10.1/index.html #/what_is_a_z_score_what_is_a_p_value/005p00000006000000/. [9] karpíšek, z. (2004): statistické tabulky [statistical tables]. institute of mathematics fsi vut in brno [online]. 2004 [cit. 2013-06-03]. available at: http://mathonline.fme.vutbr.cz/default.aspx?section=2&article=121&highlighttext=tabulky [10] esri (2013). arcgis desktop: release 10.1. redlands, ca: environmental systems research institute. [11] esri (2013): average nearest neighbor (spatial statistics). in arcgis help 10.1 [online]. 2013 [cit. 2013-05-14]. 
available at: http://resources.arcgis.com/en/help/main/10.1/index.html#//005p00000008000000 geoinformatics fce ctu 11, 2013 35 http://grass.osgeo.org https://courses.washington.edu/bio480/week1-paper-clark_and_evans1954.pdf https://courses.washington.edu/bio480/week1-paper-clark_and_evans1954.pdf https://courses.washington.edu/bio480/week1-paper-clark_and_evans1954.pdf http://rmp.aps.org/abstract/rmp/v15/i1/p1_1 http://mathworld.wolfram.com/gammafunction.html http://resources.arcgis.com/en/help/main/10.1/index.html#/what_is_a_z_score_what_is_a_p_value/005p00000006000000/ http://resources.arcgis.com/en/help/main/10.1/index.html#/what_is_a_z_score_what_is_a_p_value/005p00000006000000/ http://mathonline.fme.vutbr.cz/default.aspx?section=2&article=121&highlighttext=tabulky http://resources.arcgis.com/en/help/main/10.1/index.html#//005p00000008000000 stopková, e.: extension of mathematical background for nna in 3d space [12] nelson, m. (2007): the convex hull. in computer science, web articles [online]. 2007 [cit. 2013-05-30]. available at: http://marknelson.us/2007/08/22/convex/ [13] aime, a., neteler, m., ducke, b., landa, m. (2010): v.hull [computer software]. in geographic resources analysis support system (grass) software. available at: http://trac.osgeo.org/grass/wiki/downloadsource#grass7 [14] wolfram research, inc. (2008). mathematica [computer software]. version 7.0. champaign : wolfram research, inc. [15] mathworks, inc. (2010). matlab [computer software]. version 7.11.0. natick, massachusetts: the mathworks, inc. [16] mccauley, j. d., landa, m. (2010): v.random [computer software]. in geographic resources analysis support system (grass) software. available at: http://trac.osgeo.org/grass/wiki/downloadsource#grass7 [17] vallo, d. et al. (2012): geometria telies.. .všeobecne a pútavo [geometry of bodies. . . in general and grippingly]. faculty of natural sciences, constantine the philosopher university in nitra [online]. 2012 [cit. 2013-05-30]. available at: http://www.km.fpv.ukf.sk/admin/upload_pdf/20121108_145712__0.pdf. isbn 978-80-558-0106-3. [18] zeyl, d. (2013) plato’s timaeus. in the stanford encyclopedia of philosophy (spring 2013 edition) [online], edward n. zalta (ed.). [cit. 2013-05-30].available at: http://plato.stanford.edu/archives/spr2013/entries/plato-timaeus/. [19] weisstein, e. w. (2013): archimedean solid. in mathworld. a wolfram web resource [online]. 2013 [cit. 2013-05-30]. available at: http://mathworld.wolfram.com/archimedeansolid.html. geoinformatics fce ctu 11, 2013 36 http://marknelson.us/2007/08/22/convex/ http://trac.osgeo.org/grass/wiki/downloadsource#grass7 http://trac.osgeo.org/grass/wiki/downloadsource#grass7 http://www.km.fpv.ukf.sk/admin/upload_pdf/20121108_145712__0.pdf http://plato.stanford.edu/archives/spr2013/entries/plato-timaeus/ http://mathworld.wolfram.com/archimedeansolid.html panoramic uav views for landscape heritage analysis integrated with historical maps atlases raffaella brumana, daniela oreni, mario alba, luigi barazzetti, branka cuca, and marco scaioni politecnico di milano, piazza leonardo da vinci 32, milan, italy raffaella.brumana@polimi.it abstract analysis of landscape heritage and territorial transformations dedicated to its protection and preservation rely increasingly upon the contribution of integrated disciplines. 
in 2000 the european landscape convention established the necessity ‘to integrate landscape into its regional and town planning policies and in its cultural, environmental, agricultural, social and economic policies’. such articulated territorial dimension requires an approach able to consider multi-dimensional data and information from different spatial and temporal series, supporting territorial analysis and spatial planning under different points of view. most of landscape representation instruments are based on 3d models based on top-down image/views, with still weak possibilities to reproduce views similar to the human eye or map surface development along preferential directions (e.g. water front views). a methodological approach of rediscovering the long tradition of historical water front view maps, itinerary maps and human eye maps perspective, could improve content decoding of cultural heritage with environmental dimension and its knowledge transfer to planners and citizens. the research here described experiments multiple view models which can simulate real scenarios at the height of observer or along view front. the paper investigates the possibilities of panoramic views simulation and reconstruction from images acquired by rc/uav platforms and multisensory systems, testing orthoimage generation for landscape riparian areas and water front wiew representation, verifying the application of automatic algorithms for image orientation and dtm extraction (atipe, ate) on such complex image models, identifying critical aspects for future development. the sample landscape portion along ancient water corridor, with stratified values of anthropogenic environment, shows the potentials of future achievement in supporting sustainable planning through technical water front view map and 3d panoramic views, for environmental impact assessment (eia) purposes and for the improvement of an acknowledged tourism within geo-atlas based on multi-dimensional and multitemporal spatial data infrastructures (sdi). keywords: landscape heritage, uav, image orientation, panoramic views, historical maps 1. introduction the paper relates on-going experiments driven by the necessity to provide new scenarios for retrieving geospatial knowledge of territory and instruments capable of managing informageoinformatics fce ctu 9, 2012 39 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . tion to better addresses landscape heritage policies. in this sense, the advanced geo-web instruments can give a contribution to support the preservation of ancient traces of precious anthropogenic environment, partly recognizable within the context and to provide a better comprehension of the participated citizen culture of territory. experiences carried out within the atl@nte geoportal (an on-line open-source atlas of historical cadasters and topographic maps of lombardy, www.atlantestoricolombardia.it), have provided a valid ground to compare the current landscape with different historical layers. furthermore, they ask to enhance the comprehension of such complex areas introducing innovative representation and rediscovering the semantic content potential of ancient views for anthropogenic landscape interpretation and identity recognizing process by the people, unavoidable elements during the preservation process. a significant and well representative case study in the italian alpine foothills (pre-alpine region) is reported and discussed. 
the possibility of utilising image sequences gathered by unmanned aerial vehicles (uav) combined with existing digital terrain models (dtm) was investigated in order to obtain 3d textured models for landscape analysis, especially in areas featuring strong vertical edges (built environment of hills, coast, mountains, and the like). a model helicopter, flying at few tens of metres from the ground, has been tested to acquire images all around a given point. the composition of the images to form panoramic views can be used to reproduce the ‘human’ point of view at a certain location. this kind of landscapes cannot be represented in a realistic way using airborne imagery which are normally applied for state-of-the-art landscape 3d modelling. according to landscape policies in eu and italian frameworks, the necessity to develop new tools for representation to be used for landscape planning simulation (sect. 1) has been discussed. these include experimentation of rc/uav images for panoramic views reconstruction (sect. 2). a case study area has been selected to support ancient view simulation for scenario analysis and touristic valorisation of historic itinerary road (sect. 3). the problem of image-orientation for non-conventional image acquisition dedicated to orthoimage projection of non-conventional views has been discussed relating to sensor system rc/uav platform and gps digital camera (sect. 4). the promising potentials of this approach for the future strengthening of e-contents through web atlas and modern devices are illustrated in section 5. 1.1. eu and italian legislative framework for landscape planning and scenarios simulation the italian legislative framework on the protection of the landscape has been developed according to the european landscape convention [16], stipulated in florence in 2000 and ratified by the italian government in january 2006. here the landscape of every country includes the historical, monumental and the natural characteristics of the territory, considered as part of the cultural heritage of all european citizens. identity and recognition become two key features of the landscape quality and contribute to the formation and the increasing of the individual and social quality of life. the landscape thus becomes a resource for sustainable development of all countries, with the result that the whole territory must be considered in the plans and programs to enhance the landscape, with the attention directed not only to the ‘exceptional’ places, but also to the ‘everyday life landscapes and degraded landscapes’. acting on the recommendations contained in the european spatial development perspective geoinformatics fce ctu 9, 2012 40 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . (esdp), prepared by the eu in may 1999 in potsdam, the eu landscape convention also imposes the need for each country to integrate landscape into regional and town planning policies, in cultural, environmental, agricultural, social and economic policies, as well as in any other policies with possible direct or indirect impact on landscape. the objective is to formulate general application principles, strategies and guidelines that permit the taking of specific measures aimed at the protection, management and planning of landscapes. 
in this frame, the code of cultural heritage and landscape (dl2 n.42/2004, [18]) stated the procedures for the landscape granting, assigning the tasks of monitoring projects to the architectural heritage and landscape office, the italian ministerial body in charge of protection, called upon to make an assessment on the project proposals. its opinion has a binding force for the regions, the agency delegated for the final authorisation, which in turn must ensure that the projects will respect the regulatory guidelines for provincial and municipal planning, as contained in the regional landscape territorial plan. the legislative objective, through this whole process of approval, is to lead transformation of landscape taking into account the morphology of the places, the scenic and environmental context, and the traces of their history, and not to overlap uncritically and brutally to existing landscape. the code made compulsory the submission of the ‘report on landscape’ together with the application for authorization, both essential for the assessment, made by the competent authority, of the project in relation to the elements of landscape value, highlighting the impact of the projects on the landscape and the elements of mitigation and compensation required. furthermore, landscapes along the water paths and coastal areas are considered as regions of a great strategic importance for the eu hence in this sense, inspire directive [18] together with shared environment information system (seis) are identified as main tools to facilitate the information flow in these areas. regarding the assessment of the landscape compatibility of the transformation proposed, it is prescribed to submit synthetic analyses of both state-of-the-art and the project. they must include not only descriptions, current or historical map extracts, but especially detailed simulations, made by ‘realistic photo-modelling that includes an appropriate area around the project, calculated from the ratio of existing intervisibility’. the ‘photographic representation’ of the project area and of the landscape contest must be taken from accessible places and/or scenic routes, as specified by the scottish natural heritage for landscape management environmental impact assessment (eia) [17]. it must include fronts, skylines and visual perspective from which the transformation is visible, with particular reference to high visibility areas (e.g. slopes, coasts). in this context, the development of innovative 3d metric representation of the landscape is an improvement to the traditional 2d photographic representations of panoramic views. 2. sensors for landscape image processing: state of art development of rc/uav sensors for documentation, inspection and surveying of cultural heritage is a field partially explored with success in archeology [5], by inheriting the horizontal flight of aerial photogrammetry and image orientation.with reference to landscape and environmental heritage domain, different problems have to be investigated in photogrammetric rc/uav applications, regarding image acquisition and orientation for 3d reconstruction of landscape with a complex morphology, needing unconventional camera poses, vertical images or/and oblique images. the need of an accurate flight planning and control requires data acquisition to be carried out along a regular path. indeed, photogrammetric surveys geoinformatics fce ctu 9, 2012 41 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . 
figure 1: atl@s web geoportal and the landscape area test along thematic axe of the lambro river need a block of images taken from different points of view at suitable scales and baselines. the integration with gnss/ins [3,4,6] sensors allowed to obtain satisfying results, at the landscape scale. camera calibration and image orientation have reached a high level in term of automation [8]. in the case nadiral images, the use uncontrolled flight path under the condition of well-targeted ground control points (gcp) enable the application of algorithms for automatic orientation. research on fully automatic uav image-based sensor orientation [8] opens opportunities in contexts like water view fronts devoted to metric image processing and information extraction.moving the point of view, the gcp detection for the complex schema image block become difficult, especially in the case of very low cost uncontrolled flight, that ask to develop orientation and 3d modelling from markerless images [11] considering the wide spread potential of similar applications. controlled sensor systems could be matched with the results obtained by multiple views 3d texturized models in the computer vision domain (acute3d, http://acute3d.com/) in order to obtain technical instrument for line-of-sight analysis. 3. sdi integration with ancient view simulation for scenario reconstruction and touristic valorisation of historic itineraries landscape heritage analysis, safeguarding and preserving can benefit from the integration of multi spatial-temporal data within a spatial data infrastructure (sdi), rediscovering ancient map views of territory and different point of views of landscape observation [14]. the atl@nte geoportal has provided a valid ground to deeper investigate and compare the current landscape with different stratified historical layers (fig. 1) through the wms/wfs generated on georeferenced ancient cadastral maps related to the current map (google® imagery, 3d orthophoto maps, such as terraitaly®, large scale technical map, etc.). such a geo-metric grid allows to superimpose different chronological series, extracting information about transformations, permanences and mutations, especially if strictly related to the in situ data collection on the stratigraphic units, based on chrono-typological and archeometric approach [2]. a study case along water axes within atl@nte was selected in the italian pre-alpine region. motivation of this choice were the rich historic background and the complex landscape morphology of natural and built environment. both aspects showed the potentials of providing new scenarios for retrieving geospatial knowledge of territory and instruments capable to supgeoinformatics fce ctu 9, 2012 42 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . port landscape heritage knowledge, safeguarding policies, addressing planning and divulgation instruments among citizens for sustainable tourism. rediscovering the perception of landscape value and synthesis represented in the past through observer point of views, generating ‘vedute’ and perspective maps, water front views of portion of landscape faced on the inland water paths or lakes, that can be very useful if gradually integrated within modern geo-portal, introducing an added value of territorial map representation. 
architectural survey is usually obliged to move the point of view respect different plans of interest in function of the different structural elements (vertical fronts, facades, horizontal planes, floors, vaults). on the other hand, landscape representation is progressively trying to overcome the unique traditional nadiral privileged point of view represented by the aerial images acquired for current 2.5d cartography. different point of views have to be gradually incorporated, as in the case of 3d city model navigation (e.g. google earth®) obtained by the individual projection of rectified images on building facades. the experimentation of mobile sensor platform image acquisition using an rc helicopter to reconstruct has been planned to reconstruct ancient panoramic views that represent an important witness of the great value assigned to this portion of landscape. the manifold aspects of interest will be addressed to in section 1.2. in the conviction that rediscovery of ancient views can contribute to the rediscovery of the stratified values to preserve this area within planning policies and to valorise divulgation of touristic circuits, helping perception by the people using geo-portal, smart phone devices to strengthening comprehension of the historic geography of such places. in particular, a test was carried out on a simulation of ancient views generally acquired from privileged point of view (such as bell towers) on ‘strategic’ zones of territory, as in the case of ‘non-rigorous’ or quasi-perspective maps, such as the ‘pastoral visits to the pievi’ by san carlo borromeo in lombardy, xvi century (fig. 3). panoramic image acquisition by rc/uav flight simulation has been experimented in order to verify the potentials and critical aspects for innovative representations of stratified multi-spatial/multi-temporal landscape reconstruction of complex strategic areas developed in the centuries along the ancient great itinerary roads. the sample test-site represents an important node along the roman road from aquileia to the north europe, developed upon neolithic settlement discovered around the area for favourable climate condition, water resources. this area featured a rich anthropogenic stratified territory with a natural defensive morphology. celtic, roman and lombard archaeological settlement were found in the ‘park of barro’, and with the important lombard-benedictine monastery of san pietro al monte, civate. despite the richness of this area we miss a synthetic view at 360circ able to transfer those values to avoid the destroying intervention recently occurred by the new tunnel access exactly in correspondence of this node. an alternative solution, not completely far from the adopted one, would have more safeguarded this place. a possible contribution to the divulgation of a participated citizen culture, by preserving ancient traces of precious landscape values conserved till now, can be given by advanced geo-web instruments. atl@s web can support historic geography and stratified ancient traces here represented on the traditional dtm and satellite imagery. the immediate perception given by the geospatial representation can be improved by privileged point of view of interest that can be usefully integrated to enhance the comprehension of beautiful natural and anthropogenic context geoinformatics fce ctu 9, 2012 43 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . 
figure 2: the sample landscape area, on the lake of annone closed to the access to the lake of como, is placed along an important node of the roman road from aquileia to the north europe. the principal stakeholder stratified landscape element are signalized. figure 3: images acquired from ground level (left) do not provide an overall view of the ancient map (right). panoramic images (centre) acquired at the height of the bell tower using rc platform flight, allowed to reconstruct the privileged point of view of xvi century map of the pastoral visit to the pieve by s. carlo borromeo acquired from the bell tower of the church s. eufemia and romanic baptistery. unknown by the people at the local municipality and global level. 4. non controlled flight and controlled sensor system for landscape orthoimages generation from panoramic views one of the aims of this project is the creation of panoramic images in order to texturize different landscapes. since the approach is based on images taken from the ground instead of imagery used for aerial photogrammetric applications, some occlusions can be present. they can degrade the quality of the final results. this means that several objects located between the camera position and the investigated elements can be mapped generating a false result. on the other hand, this problem could be (partially) solved by employing images taken from different positions, with an interactive selection of the portions to be used. images acquired by an rc model helicopter have been used for high resolution modelling of the ground, tested on a portion of terrain with measured gcps, while images acquired by digital camera integrated with gps allows to solve orientation problem in order to obtain a 3d ortho-projection of a water view front portion without gcps. geoinformatics fce ctu 9, 2012 44 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . 4.1. flight test and panoramic uav images: data processing critical aspect automatic image orientation by atipe and automatic high resolution dtm extraction by using pmvs2 or lps eate techniques are here discussed with reference to the sensor system equipped on the platform and to the context. furthermore, uav images have been also used for high resolution modelling of the ground, tested on a portion of terrain with measured gcps. in this experience, an automatic processing pipeline for the production of high resolution orthoimages has been setup. this includes automatic image orientation by atipe [8] and automatic high resolution dtm extraction by using pmvs2 or lps eate techniques. the former can be used to compute the exterior orientation (eo) through aerial triangulation of images taken with a calibrated camera, without any user interaction. in order to produce high resolution orthoimages or textured models from airborne sensors, terrain models derived from oriented uav images using state-of-the-art matching techniques will be further investigated. eventually, different software packages for orthoimage production from uav data have been tested and evaluated according to image configuration and resolution. to check the metric accuracy achievable with the uav system a photogrammetric block made up of 25 images was acquired in the area test over the municipality of oggiono (lombardy, italy) on the top of a small hill with a dominant horizontal extension (fig. 4). the uav platform gives the opportunity to mount different types of cameras, according to the aim of each specific survey. 
in this case, a calibrated nikon d90 (4288x2848 px) equipped with a 20 mm sigma lens was employed. this camera has also the capability of acquiring video sequences. the network geometry includes images with a high overlap. some of these were also quite oblique in order to obtain a better reconstruction of the elements over the horizontal plane (e.g. walls, columns, statues). the images were oriented with the atipe procedure [11], using sift features automatically extracted for several combinations of image pairs. to obtain a georeferenced result, 20 targets were placed on the ground. they were measured with a combined theodolite/gps survey. 9 points were employed as gcps in the bundle adjustment, while the remaining ones were used to check the quality of the estimated eo parameters. the coordinates of the centres of all targets were measured with the lsm algorithm, obtaining very precise images coordinates. the average height the above ground during the flight was ca 14m. rmse values found were 5mm in the horizontal plane (x-y) and 17mm along the vertical direction (z). figure 5 shows a panoramic image generated by combining different pinhole shots. the object to be mapped is on the other side of the lake, facing the roman road, the benedictin monastery and the park of barro. however, several other images were collected to obtain a final panorama with a large field of view, imaging also object in the nearby. a global visualization of the image can give nice results (although the weather conditions was not optimal due to mist). however, the final image is quite clean in correspondence of the target object, while for closer elements (e.g. trees, bell tower) some parallax errors were found. 4.2. panoramic image for 3d spatial database texturing the availability of an accurate 3d spatial database of the region allows to obtain a model usable for landscape representation purposes. this approach overcomes dtm extraction in order to generate panoramic photo projection of view front images acquired along riparian areas. landscape visualization and photomontage experiments in the field of line-of-sight geoinformatics fce ctu 9, 2012 45 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . figure 4: the uav system, an image of the area and the scheme of theodolite/gps measurements. figure 5: the panoramic image generated from a model helicopter from the ancient view point. map integration are beginning to implement visual impact assessment tools [12] in order to enhance metric approach with respect to qualitative approach and passive auto stitching software packages [1]. advanced algorithms developed for textured 3d models on panoramic images acquired by controlled sensor have been utilised. integration of gps sensors with digital camera has simulated fully equipped flight control experimented in case of close range acquisition by ground instead of by flight; implementation of rc platform with this sensor system would allow to enhance the orientation of such geometry for orthoprojection purposes. here is described a methodology for automatically texture a 3d spatial database by oriented panoramic image and the results of the test application carried out on the shore of lake como, at the appendix node of the area test. in order to obtain a more complete acquisition of the lake-front different images have been acquired for each standpoints (fig. 6). photogrammetric surveys was carried out with a full-format nikon d700 (4256x2832 pixels) camera with calibrated 90mm lens. 
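to put the rmse values reported above in context, a rough ground sampling distance for the 25-image block flown at about 14 m above ground can be estimated as follows; the ~5.5 µm pixel pitch is an assumption based on the d90’s aps-c sensor width of roughly 23.6 mm, so the figures are indicative only:

```python
# back-of-the-envelope ground sampling distance (gsd) for the oggiono test block
pixel_pitch = 23.6e-3 / 4288        # assumed sensor width [m] / pixels across, ~5.5e-6 m
focal = 0.020                       # 20 mm sigma lens [m]
height = 14.0                       # average flying height above ground [m]

gsd = pixel_pitch * height / focal  # ~3.9 mm on the ground per pixel
image_scale = height / focal        # ~1:700
print(f"gsd ~ {gsd * 1000:.1f} mm, image scale ~ 1:{image_scale:.0f}")
```

on this basis the 5 mm horizontal rmse corresponds to roughly one pixel on the ground, while the 17 mm vertical rmse amounts to a few pixels, which is a plausible order of magnitude for a block of this geometry.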
each image was georeferenced with the so called ‘photo-gps’ technique [9], where rtk-gps measures of the camera positions and image are acquired together. for each position a set of images was acquired with ‘photo-gps’ system, maintaining the same coordinates of image location while the orientation angles changed. three panoramic images were generated from the images acquired at different stand points (fig. 7). the tie points of panoramic images were automatically extracted. finally the camera positions and photogrammetric observations are adjusted together in order to obtain the eo parameters in wgs84 reference system. the oriented panoramic image has been automatically projected on the 3d spatial database (1:2,000 scale) and the first results are shown in figure 8. 5. conclusions with reference to the sensor system, the implementation of controlled flight equipped on the platform with camera gps allows to support panoramic image processing using automated geoinformatics fce ctu 9, 2012 46 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . figure 6: (left) test site, in red different standpoints and the lines-of-sight. (right) ‘photogps’ system during the acquisition of images. figure 7: two of the three panoramic image acquired from different stand-points techniques for image orientation and 3d surface reconstruction. many critical aspects of the panoramic image projection of water coast views to obtain metric water front has to be faced and deeper investigated: segmentation of orthoimage projection on a 3d spatial database with the projection of portions of panoramic views along the sloped coast areas, analysis of geometric congruence and condition to synchronize the exact projection of the built up portion located on the second and third plane respect the foreground acquired on the close up building faced on the coast line. the research will continue the tests on these products, to support metric textured model generation and management within easy access software environment used by planners and professionals in order to generate a widely useful technical instrument. developing multiple-choice representation would straighten the comprehension of complex scenarios, such as historic road and ecosystem corridors rediscovering new interpretation of the ancient itinerary maps and bird eye perspective contents. modern devices integration to web geo-portal (such as smart phone, ipad applications) could allow an on-site touristic divulgation and data sharing with local growth of sustainable touristic itineraries and identity process of knowledge and consciousness by the citizen. the sample area on landscape portion along water corridor with multiple stratified values of anthropogenic environment has shown the potentials of future achievements in this research field through a complementary integration of innovative technical map within atlas based on multi-temporal sdi: they contribute to enhance the comprehension of complex landscape portions, rediscovering the semantic content of ancient views for anthropic landscape analysis and identity recognizing process by the people, an unavoidable component in the preservation process. references [1] m.brown, d.g. lowe, 2005. automatic panoramic image stitching using invariant features, int. journal of computer vision, volume 74, number 1, 59-73, doi: 10.1007/s11263-006-0002-3. 
geoinformatics fce ctu 9, 2012 47 https://springerlink3.metapress.com/content/?author=matthew brown https://springerlink3.metapress.com/content/?author=david g. lowe https://springerlink3.metapress.com/content/0920-5691/ https://springerlink3.metapress.com/content/0920-5691/74/1/ brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . figure 8: figure. 8 (above) images of topographicdb automatically textured by oriented panoramic images. (below) vrlm model render images done within modelling sw environment (3dstudiomax) [2] brumana, r., achille, c., prandi, f., oreni, d., (2006). 3d data model for representing an historical centre site udms'06 25th urban data management symposium udms aalborg, denmark 1-8 9 part iv [3] colomina, i. et al, 2007. the uvision project for helicopter-uav photogrammetry and remote-sensing. 7th geomatic week, spain. [4] wang, j., 2008. integration of gps/ins/vision sensors to navigate unmanned aerial geoinformatics fce ctu 9, 2012 48 brumana, r. et al.: panoramic uav views for landscape heritage analysis . . . vehicles. iaprssis, 37(b1): 963-9. [5] bendea, h.f., chiabrando, et altri, 2007. mapping of archaeological areas using a lowcost uav the augusta bagiennorum test site. in proc. xxi int. cipa symp, athens, greece, on cdrom. [6] bento, m.d.f., 2008. unmanned aerial vehicles: an overview. insidegnss (jan/feb 2008): 54-61. [7] eugster, h., s. nebiker, 2008. uav-based augmented monitoring – real-time georeferencing and integration of video imagery with virtual globes. iaprssis, 37(b1): 1229-1235 [7] barazzetti l., remondino f., scaioni m., brumana r., 2010. fully automatic uav image-based sensor orientation. in proc. of isprs comm. i symp., calgary (canada), iaprssis 38(1), on cdrom, 6 pp. [8] forlani, g., pinto, l., 2007. gps-assisted adjustment of terrestrial blocks. in: proc. of the 5th int. symp. on mobile mapping technology (mmt’07). padova, issn 1682-1777, cd-rom, pp1-7. [9] wang, m., bai, h., hu, f., 2008. automatic texture acquisition for 3d model using oblique aerial images. first international conference on intelligent networks and intelligent systems (icinis 2008), pp. 495-498, wuhan, china. [10] l. barazzetti, f. remondino, m. scaioni (2010). orientation and 3d modelling from markerless terrestrial images: combining accuracy with automation. the photogrammetric record, (pp. 356381). [12] r. berry, g. higgs, m. langford, r. fry, 2010. an evaluation of online gas-based landscape and visual impact assessment tools and their potentials for enhancing public participation in the uk, webmgs 2010, xxxviii-4/w13 [11] remondino, f., rizzi, a., 2010: reality-based 3d documentation of natural and cultural heritage sites – techniques, problems and examples. applied geomatics, vol.2(3): 85-100 [12] cuca b., brumana r., scaioni m., oreni d. [2011], “spatial data management of temporal map series for cultural and environmental heritage”, international journal of spatial data infrastructure research (ijsdir) vol. 6 (2011). [13] m. santana quintero, k. van balen, (2009) “rapid and cost-effective assessment for world heritage nominations”, 22nd cipa symposium, october 11-15, 2009, kyoto, japan [14] european landscape convention, council of europe, florence, 20.x.2000. european spatial development perspective (esdp) postdam, may 1999. http://ec.europa. eu/regional_policy/sources/docoffic/official/reports/som_en.etm [15] scottish natural heritage (2008). 
http://www.snh.org.uk/publications/online/ heritagemanagement/eia/appendix1.shtml [16] inspire eu directive (2007). directive 2007/2/ec of the eu parliament and of the council (14 march 2007) establishing an infrastructure for spatial information in the eu community, official journal of the european union, l 108/1(50), 25th april 2007. geoinformatics fce ctu 9, 2012 49 http://www.springerlink.com/content/l7pm52003v078472/ http://www.springerlink.com/content/l7pm52003v078472/ http://www.springerlink.com/content/121279/?p=d7033c10a065426e965210fa13fc2971&pi=0 http://ec.europa.eu/regional_policy/sources/docoffic/official/reports/som_en.etm http://ec.europa.eu/regional_policy/sources/docoffic/official/reports/som_en.etm http://www.snh.org.uk/publications/online/heritagemanagement/eia/appendix1.shtml http://www.snh.org.uk/publications/online/heritagemanagement/eia/appendix1.shtml geoinformatics fce ctu 9, 2012 50 system software testing of laser tracker leica at401 filip dvořáček department of special geodesy, faculty of civil engineering czech technical university in prague thákurova 7, 166 29 prague 6, czech republic filip.dvoracek@fsv.cvut.cz abstract the article introduces a group of instruments called laser trackers and specifically focuses on one of them leica at401. at the research institute of geodesy, topography and cartography the instrument has been tested both in laboratory and outdoor conditions. several significant errors in the instrument’s system software have been found, mostly due to the creation of user-programmed controlling application called atcontrol. the errors are related to a selection, a computation and an evaluation procedure of the refractive index of air. finally, notes of the new measurement modes of leica at40x are given and a range of distance measurement is discussed. keywords: laser tracker, absolute distance measurement, leica at401, system software error, group refractive index of air 1. introduction a group of instruments called absolute laser trackers has caused a small revolution in the field of length metrology. although these devices are meant to be mainly used in industrial metrology (car and aircraft industry), the impact of laser trackers is far wider and also effects engineering geodesy. relative precision as well as overall absolute accuracy of distance measurement has brought about new possibilities for surveyors and metrologists in determining the fundamental length unit. laser trackers have become parts of laboratory equipment all over the world [1], as they are tools for creating standards and etalons and they provide metrological traceability to the definition of the meter. the leica at401 instrument was introduced to the public in 2010. its successor at402 with similar technical specifications and the same system software followed in 2013. the letters at stand for absolute tracker, signifying that instruments implement a new technology called absolute distance measurement. it is currently capable of superior accuracy stated by the manufacturer as a standard deviation of 5 µm over the whole 160 m working range [2]. parameters of all up-to-date edm (electronic distance measurement) devices, integrated in geodetic total stations, are not even close to this point. during past decades, ageing predecessor kern mekometer me5000 with its accuracy 0.2 mm + 0.2 ppm × d km (d = measured distance) had held the status of the most powerful instrument for long distance measurement [3]. 
but recently, laser tracker systems have taken over this role and stand in the focus of current interest and research. geoinformatics fce ctu 13, 2014, doi:10.14311/gi.13.6 49 http://orcid.org/0000-0003-4336-056x http://dx.doi.org/10.14311/gi.13.6 dvořáček f.: system software testing of laser tracker leica at401 research institute of geodesy, topography and cartography (rigtc/vúgtk) department of metrology and engineering geodesy deals with both the laboratory and field testing of leica at401 in order to employ this instrument in the calibration process of the czech state long distances measuring standard koštice [4] and national geodetic baseline hvězda. this article describes the system software errors found by the author and introduces some practical issues which occur during field measurement. because of the lack of suitable geodetic software solutions on the market, the user-programmed mathworks matlab application called atcontrol is used for communication with leica at401 during both testing and measuring. 2. system software errors 2.1. the computation procedure of the group refractive index of air if you want to know the way the refractive index is computed by the instrument, you have to contact the manufacturer. from leica, a document describing this procedure was obtained on 5th may 2013 [5]. according to this paper and practical testing, at401 uses equations derived from edlén´s formula. a study was made in which 10 most popular procedures of indirect computation of group refractive index of air is compared and analyzed [6]. all formulae were derived from primary information sources. edlén´s equation [7], originally published in 1966 and nowadays recognized by the community as inaccurate in terms of humidity, posted the worst result in the study. it reached nearly a 0.5 ppm difference from the ciddor & hill procedure (1996 [8], 1999 [9]) recommended by a resolution of the international geodetic association (iag) in 1999 in birmingham [10]. leica’s modified formula showed no significant difference from edlén’s. although the testing parameters were quite extreme (up to 35 °c air temperature and 100% relative humidity) they are not unrealistic concerning the instrument´s operating range (up to 40 °c and 95%) [11]. figure 1: comparison of methods for evaluating the group refractive index of air there is a half-done solution in the system software prepared by leica [12]. by changing weather monitor status from calculate (default) to read only or off, the user may implement their own computation procedure of the refractive index of air. this is exactly the way the complex ciddor & hill procedure and also others were added to the atcontrol software to offer the possibility to make a choice. unfortunately, there is doubt that ordinary users geoinformatics fce ctu 13, 2014 50 dvořáček f.: system software testing of laser tracker leica at401 are able to apply this functionality because available commercial software solutions do not implement it. therefore, only the default setting is usually available for measuring. 2.2. wavelength of the adm in the paper from leica [5] you can find that at401 operates on 795 nm wavelength. in the instrument’s manual 780 nm is declared [11]. a technical support reacted that the mistake is in the manual. to confirm this, a simple test including measuring with recording atmospheric parameters and refractive indices and theoretical calculations of given equations (1) and (2) was performed. 
unfortunately, the measured and the calculated results did not fit together, and therefore more research had to be done.

n_gr,ppm = a · p · [1 + 10^-6 · p · (0.613 − 0.010 · t)] / (1 + 0.0036610 · t) − b · h · 10^(7.5·t / (t + 237.3) + 0.6609)    (1)

n_gr = 1 + n_gr,ppm / 10^6    (2)

where t is the air temperature [°c], p the atmospheric pressure [hpa] and h the relative humidity [%].

both 780 nm and 795 nm were now considered, along with a variety of constants a and b (table 1) given by leica and calculated as stated in a presentation [13] of munich university, equations (3) and (4). the constant b for 780 nm trackers which had been received from leica is not in congruence with the equations, but this has a negligible impact on the results and will not be discussed here any further.

a = 86.8109·10^-3 + 25.03792 / (130 − 1/λ²) + 0.16647 / (38.9 − 1/λ²) + (2/λ²) · [25.03792 / (130 − 1/λ²)² + 0.166467 / (38.9 − 1/λ²)²]    (3)

b = 572.2 − 13.71 / λ²    (4)

where λ is the carrier wavelength [µm] and b is expressed in units of 10^-6.

table 1: constants of the refractive model for leica at401

  constant     780 nm, given by leica   780 nm, calculated   795 nm, given by leica   795 nm, calculated
  a            0.2917349                0.2917349            0.2914269                0.2914269
  b [10^-6]    556.68                   549.67               550.51                   550.51

on the other hand, an incorrect substitution of 780 nm instead of 795 nm has much more serious consequences. the values of the refractive index calculated by the leica at401 are in perfect match with the manually calculated refractive index if the 780 nm wavelength and the b constant given by leica are employed. at this stage, the issue is very complex because all documents, including leica's support answer, are somehow wrong: the manual is wrong about 780 nm, the paper is wrong about the way the at401 computes the refractive index, and the technical support insists on the validity of the sent paper. in the end, the manufacturer recognized that the leica at401 physically uses the 795 nm wavelength and that the programmed default computation of the group refractive index of air is wrong. it can be estimated by error analysis that a mistake of 15 nm in the wavelength causes an error of the group refractive index of approximately 0.3 ppm. this analysis was also confirmed by practical calculations, see the 0.28 ppm difference in table 2.

table 2: analysis of the at401 refractive model

  atmospheric conditions               group refractive index of air
  temperature  pressure   humidity     calculated by at401   manually calculated
  [°c]         [hpa]      [%]                                795 nm           780 nm, b_calc   780 nm, b_leica
  19.50        985.811    58.0         1.0002679940          1.0002677165     1.0002680009     1.0002679940
  23.30        984.373    46.8         1.0002641449          1.0002638716     1.0002641519     1.0002641449
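as a cross-check of equations (1) to (4), the following short script evaluates the refractive model for both candidate wavelengths and reproduces the manually calculated values of table 2 to within rounding. it is a minimal sketch written for this text, not leica's firmware or the atcontrol code; the later sketches in this section assume it is saved as refraction.py so that n_group() can be reused.

```python
# minimal sketch of the at401 refractive model, equations (1) to (4); written for
# illustration only, not taken from leica's firmware or from atcontrol.

def constants(lam_um):
    """constants a and b of the refractive model for a carrier wavelength in micrometres."""
    s2 = 1.0 / lam_um ** 2                       # 1 / lambda^2
    a = (86.8109e-3
         + 25.03792 / (130.0 - s2)
         + 0.16647 / (38.9 - s2)
         + 2.0 * s2 * (25.03792 / (130.0 - s2) ** 2 + 0.166467 / (38.9 - s2) ** 2))
    b = (572.2 - 13.71 * s2) * 1e-6              # equation (4); table 1 lists b in 10^-6
    return a, b

def n_group(t_c, p_hpa, rh_pct, lam_um=0.795, b_override=None):
    """group refractive index of air according to equations (1) and (2)."""
    a, b = constants(lam_um)
    if b_override is not None:                   # e.g. the 780 nm value supplied by leica
        b = b_override
    dry = a * p_hpa * (1.0 + 1e-6 * p_hpa * (0.613 - 0.010 * t_c)) / (1.0 + 0.0036610 * t_c)
    wet = b * rh_pct * 10.0 ** (7.5 * t_c / (t_c + 237.3) + 0.6609)
    return 1.0 + (dry - wet) * 1e-6              # equation (2)

if __name__ == "__main__":
    # first row of table 2: 19.50 degc, 985.811 hpa, 58.0 % relative humidity
    print(n_group(19.50, 985.811, 58.0, lam_um=0.795))                        # ~1.0002677165
    print(n_group(19.50, 985.811, 58.0, lam_um=0.780))                        # ~1.0002680009
    print(n_group(19.50, 985.811, 58.0, lam_um=0.780, b_override=556.68e-6))  # ~1.0002679940
```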
2.3. improper updates of the group refractive index of air

this error, improper updates of the refractive index of air in the memory of the instrument's emscon server, is considered by the author to be the trickiest and most serious one. it was never acknowledged by the technical support as a real error; rather, it was called the standard behaviour of the system. it is everyone's choice which behaviour of the instrument is metrologically acceptable, but it should always be transparent and predictable. when measuring with the atcontrol software, the at401 is always asked to return the actual refractive indices used for the calculation of the corrected measured distances. the indices are then saved beside the other measured data for post-processing purposes. the author soon noticed that the group refractive index remains unchanged even if the atmospheric parameters differ in time. only quite large jumps of the refractive index were registered from time to time.

during a simple test, the instrument's external ntc temperature sensor was heated by hand to simulate significant changes of temperature (and thus of the refractive index).

table 3: improper updates of the group refractive index of air

  atmospheric conditions               group refractive index of air
  temperature   pressure    humidity   n_g [-]            change [ppm]   update
  [°c]          [hpa]       [%]
  22.6          984.354     48.3       1.000264774151
  22.8          984.361     48.0       1.000264774151      0.00          no
  22.9          984.383     47.7       1.000264774151      0.00          no
  23.0          984.335     47.5       1.000264774151      0.00          no
  23.3          984.373     46.8       1.000264144921     -0.63          yes
  23.7          984.314     45.8       1.000264144921      0.00          no
  22.8          984.170     49.0       1.000264719968      0.58          yes
  23.0          984.200     48.4       1.000264719968      0.00          no
  23.3          984.165     47.6       1.000264079414     -0.64          yes

from the results (table 3) it was derived that the refractive index is updated only if a new value differs from the old one by 0.5 ppm or more. as investigated, this applies to all measurements in the automatic calculate mode but also to all measurements in the manual modes read only and off. in practice, if the refractive index is computed and handed over to the instrument, it is expected that the distance will be corrected using this specific value. nevertheless, in most cases the instrument will use some old number stored in its memory, and no information about it is provided. such behaviour was confirmed by the technical support in leica's testing software called tpianalyzer; a print screen from this testing is available. similar explicit testing could have been achieved in atcontrol as well, but there was no need to do that anymore. the only functionality which serves well and updates the refractive index immediately is the command setenvironmentparams. in that case the meteostation atc400 is turned off, the atmospheric parameters are inserted manually and the device computes the refractive index automatically by the default edlén procedure. however, these circumstances are not met very often during real measurement and this setting is inconvenient for practical use. by neglecting changes in the group refractive index of up to 0.5 ppm, changes in air temperature up to approximately 0.6 °c are simultaneously not taken into account. this can be illustrated on the graph (fig. 2). temperature changes in time were simulated by spontaneous heating of the at controller 400 internal temperature sensor and the same length (31 m) was measured repeatedly. the obvious jumps of the distance measurement were caused by the discussed firmware error, which is demonstrated by the size of the jumps of about 0.5 ppm.

figure 2: improper updates of the refractive index of air by leica at401

up to a 1 ppm difference between two measurements of the same distance can be found due to improper updates of the refractive index of air. the results depend on whether the refractive index (or the temperature, respectively) is rising or dropping and on the previous value of the refractive index stored in the instrument's memory. an answer from leica's technical support in switzerland stated that all this had been programmed in order to speed up the whole system and eliminate delays. the author of the article assumes that this cannot be the real reason. every time a new set of atmospheric parameters is observed (every 20 s), the refractive index has to be computed anyway, but usually it is not saved as it should be.
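the update rule derived from table 3 can be reproduced in a few lines. the sketch below, written for this text only and not representing leica code, models a server that keeps the last accepted group refractive index and accepts a new one only when it differs by 0.5 ppm or more; feeding it a slow temperature drift shows jumps of roughly 0.5 ppm in the corrected distance, as observed on the 31 m test length of figure 2. the class name and the sequence of simulated readings are invented, and n_group() comes from the previous sketch (assumed to be saved as refraction.py).

```python
# simulation of the 0.5 ppm update threshold observed in table 3; illustrative only.
# assumes the previous sketch is saved as refraction.py so n_group() can be imported.
from refraction import n_group

THRESHOLD_PPM = 0.5          # an update is accepted only if |new - stored| >= 0.5 ppm
DISTANCE_M = 31.0            # length measured repeatedly in the experiment of figure 2

class EmsconLikeServer:
    """keeps the last accepted refractive index, mimicking the described behaviour."""
    def __init__(self, n_initial):
        self.n_stored = n_initial

    def offer(self, n_new):
        if abs(n_new - self.n_stored) * 1e6 >= THRESHOLD_PPM:
            self.n_stored = n_new        # accepted ("update: yes" in table 3)
        return self.n_stored             # value actually used for the velocity correction

server = EmsconLikeServer(n_group(22.6, 984.354, 48.3))
for t in (22.8, 23.0, 23.2, 23.4, 23.6, 23.8):        # simulated slow warming of the sensor
    n_true = n_group(t, 984.3, 47.0)
    n_used = server.offer(n_true)
    error_ppm = (n_used - n_true) * 1e6
    error_um = error_ppm * DISTANCE_M                 # 1 ppm corresponds to 1 um per metre
    print(f"t = {t:4.1f} degc   error = {error_ppm:+5.2f} ppm ({error_um:+6.1f} um over 31 m)")
```

the same accept/reject rule also explains why the workaround used in atcontrol and described below, handing over an index deliberately offset by more than 0.5 ppm before the correct one, always forces the correct value to be accepted.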
promises of the customer support to pass this issue onto the developer department had been made, but one and a half years later no change in the instrument’s system software has taken place. in the atcontrol application it was possible to overcome this error without any noticeable delay during the measurement process. in addition, it had to be arranged in a much more complicated way then leica could do it. correcting the error is possible only in manual geoinformatics fce ctu 13, 2014 53 dvořáček f.: system software testing of laser tracker leica at401 modes read only and off of the meteostation. after obtaining atmospheric parameters (from atc400 or elsewhere), wrong refractive indices (group and phase) are set. they have to differ from the correct ones at least 0.5 ppm. afterwards, correct refractive indices are sent to the server and, because they differ so much, they are accepted as new values. and so when the measurement starts, the first velocity correction is ensured to be computed with the proper refractive index. because of the facts stated above, it can be believed that atconrol is one of a few or maybe the only software solution which deals with the error and provides metrologically correct distance measurement with leica at40x instruments. 2.4. resolution of reading atmospheric parameters a temperature reading from the internal and external temperature sensor can be usually obtained only to a 0.1 °c resolution. for internal temperature sensor, which is heated up by electronics in the controller and gives up to 3 °c wrong results, it is understandable. for external ntc s2 sensor it is not the same case. in manual [11], leica describes the accuracy as 0.3 °c expanded uncertainty. the same sensor, ntc s2, can be purchased from hexagon metrology, but surprisingly with only 0.2 °c expanded uncertainty. this makes only the 0.1 °c standard deviation of the absolute reading, and the relative repeated reading is most likely even smaller. for the external sensor, it seems reasonable to enable the reading of the temperature to be with the 0.01 °c resolution. besides, jumps in measured distances caused by rough resolution of obtained digits are observed. to be faultless, it has to be said that temperature can be read to hundreds of °c when the absolute temperature drops below 10 °c (9,99 or lower). it seems only 3 valid digits are reserved in data type to store the value. an answer from leica was that a change is not needed and that it is simply how their system works. it is partly true because by default they neglect everything smaller than 0.5 °c by not updating the refractive index. but in spite of this behaviour, atmospheric pressure is read to 0.001 hpa and relative humidity to 0.1 %. the expanded uncertainties of the installed atmospheric sensor and the humidity sensor are only 1.0 hpa and 5%. it proves that the programmed firmware does not reflect the way atmospheric parameters effect refractive indices (measured distances respectively). if the system was really designed to work with 0.5 °c error in temperature, also 1.5 hpa of atmospheric pressure and 30% of relative humidity errors would be acceptable. in the author´s opinion, better resolution to 0.01 °c for temperature could be beneficial for measurement. on the other hand, the resolutions concerning pressure and humidity could be lowered to 0.1 hpa and 1% without any noticeable change in measurement results. 
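the proportions discussed in section 2.4 can be checked directly against the refractive model: the sketch below perturbs one parameter at a time around typical laboratory conditions and prints the resulting change of the group refractive index. it is again only an illustration, reusing n_group() from the first sketch; the reference conditions are chosen arbitrarily.

```python
# one-at-a-time sensitivity of the group refractive index to the sensor resolutions
# discussed in section 2.4; illustrative sketch reusing n_group() from refraction.py.
from refraction import n_group

T0, P0, H0 = 20.0, 1000.0, 50.0                     # arbitrary reference conditions
n0 = n_group(T0, P0, H0)

def delta_ppm(dt=0.0, dp=0.0, dh=0.0):
    return (n_group(T0 + dt, P0 + dp, H0 + dh) - n0) * 1e6

print(f"0.1 degc in temperature: {delta_ppm(dt=0.1):+.3f} ppm")    # roughly -0.10 ppm
print(f"0.5 degc in temperature: {delta_ppm(dt=0.5):+.3f} ppm")    # roughly -0.48 ppm
print(f"0.1 hpa in pressure    : {delta_ppm(dp=0.1):+.3f} ppm")    # roughly +0.03 ppm
print(f"1.5 hpa in pressure    : {delta_ppm(dp=1.5):+.3f} ppm")    # roughly +0.41 ppm
print(f"1 % in humidity        : {delta_ppm(dh=1.0):+.3f} ppm")    # roughly -0.01 ppm
print(f"30 % in humidity       : {delta_ppm(dh=30.0):+.3f} ppm")   # roughly -0.29 ppm
```

a 0.1 °c step in temperature is therefore already at the 0.1 ppm level, while the 0.001 hpa and 0.1 % resolutions of the pressure and humidity readings are far below anything that influences the result.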
in user-programmed applications, this issue cannot be resolved without interfering with the system software (firmware), which is out of the user's reach.

2.5. summary of described firmware errors

to demonstrate that the discovered errors are significant and should be taken into account by all current and potential users, a summary has been made. the purpose of the tables below (table 4, table 5) is to show how the errors may affect measuring with the at40x in ordinary conditions, both laboratory and outdoor. the impacts of the errors depend on the ambient atmospheric conditions and their gradients as well as on the length to the target point. both extremes (min, max) of these error intervals are evaluated in the tables. the max. error is derived as the maximum possible influence over the whole instrument's working range (<0; 160> m distance, <0; 40> °c temperature, <500; 1100> hpa atmospheric pressure, <0; 95> % relative humidity). in laboratory conditions, a stability of temperature of ±0.25 °c at 20 °c and a 30 m length are assumed. a reduced range of temperature <0; 30> °c is used for the outdoor evaluation. notice that a relative error of the group refractive index of air causes an error of about the same relative amount in the measured distance, so 1 ppm corresponds to 1 mm per km (or 1 µm per m) and table 5 is essentially table 4 scaled by the distance. each of the described issues by itself potentially exceeds the manufacturer's specification of the accuracy of the distance measurement (5 µm) [2].

table 4: impacts of at40x firmware errors [ppm]

                       max. error        estimated common error
                                         laboratory        outdoor
  error                min      max      min      max      min      max
  refractive model      0.01     0.64     0.07     0.12     0.01     0.32
  wavelength           -0.31    -0.27    -0.28    -0.28    -0.28    -0.27
  updating n           -0.50     0.50    -0.25     0.25    -0.50     0.50
  all together         -0.80     0.87    -0.46     0.09    -0.77     0.55

table 5: impacts of at40x firmware errors [µm]

                       max. error        estimated common error
                       (160 m)           laboratory (30 m) outdoor (160 m)
  error                min      max      min      max      min      max
  refractive model      2        102      2        4        2        51
  wavelength           -50      -43      -8       -8       -45      -43
  updating n           -80       80      -8        8       -80       80
  all together         -128      139     -14       3       -123      88

the question arises how it is possible that no one has noticed or corrected the issues. some answers can be found. firstly, the errors of the refractive model and of the faulty wavelength have opposite mathematical signs and therefore partly eliminate each other in the results. secondly, testing and measuring with laser trackers is mostly done in world-class laboratories equipped with effective air conditioning; a well-controlled environment with a stable temperature during the whole measurement process fully, or at least largely, reduces the error of improper updates of the group refractive index.

3. measurement modes system

laser trackers are still a relatively new family of products which are undergoing rapid development and innovation. under these circumstances, minor errors in functionality will not be criticized and discussed here. a frequency of about half a year for issuing new firmware for leica at40x instruments is sometimes too long if an error is desired to be fixed. since the major firmware upgrade to version 2.0.0.5053, new measurement modes have been introduced and all applications should have implemented them. the precise, standard, fast or outdoor mode can be chosen. only a brief general description of the profiles is given by leica; no specifications are available.
users may only guess that the difference consists of geoinformatics fce ctu 13, 2014 55 dvořáček f.: system software testing of laser tracker leica at401 measurement time (0.5 5 s) and also a tolerance to changing environmental conditions. according to the author´s experience, the precise mode was useable in the laboratory only up to 30 m, even though the description said it was meant to be used in uncontrolled environment. the outdoor mode, dedicated to use in “harsh environment” in field conditions, was practically not useable outdoors at all because it was not capable of determining longer distances. on the other hand, the standard mode, which should be used in a controlled environment, worked the best outside the laboratory. leica´s support answer was that if the users were fine with the standard mode in field conditions, they should have used it. the main idea of the outdoor mode is: “to make it less sensitive to disturbing light sources like reflections of the sunlight.” so with firmware 2.0.0.5053, 2.0.1.5449 and 2.1.0.4864 the standard mode was used, but since upgrading to the recent version 2.2.0.5975, a decrease of the distance range was observed. the standard mode stopped working for longer distances. the outdoor mode has operated up to about 120 m but it is far less than it was able to measure before – about 180 m. from our reseller nms slovakia (noncontact measuring systems), information was given that this firmware version dealt with unwanted reflections and so it might have affected also the sensitivity to the measuring signal. users do not have an option to make a downgrade in the tracker pilot application and, therefore, hexagon metrology service centre in prague had to be asked to do it. unfortunately, even after downgrading to version 2.1, similar problems with long distances above 100 m are present. it is believed that extended calibration of the instrument, which makes all laser beams perfectly coaxial, will return back the required capability of measuring long distances. measuring long distances in field conditions with the at401 is always challenging. the changeable atmosphere and the current hardware state make it difficult enough. additional uncertainties in the measurement mode system make it even more complicated to recognize what is the problem during measurement. the description of the error no. 113316 “distance measurement failed. general error upon a distance acquisition.” is not much helpful. reaiming to the target in order to receive a stronger reflected signal increases the chance for a successful measurement. author’s note the purpose of the article is far from an assumption that the author wanted to cause any harm to leica geosystems, hexagon metrology, nms slovakia or somehow discredit the qualities of the instrument itself. rigtc/vúgtk has been regularly using leica at401 for several years and may recommend this instrument to other users. on the other hand, if technical support does not reflect desired changes in the system software, spreading serviceable information among end users is what needs be done. the majority of customers are satisfied with the functionality of the instrument as it came from production. however, if someone wants to fully take advantage of the potential of such a great hardware, significant attention should be dedicated not only to the hardware, but to the firmware and software as well. 
the presented results of the paper are not dependent on the used testing software and can be verified by any user-programmed controlling application which enables saving of group refractive indices. contact the author of the paper in order to get the atcontrol software for free for further non-commercial scientific use. geoinformatics fce ctu 13, 2014 56 dvořáček f.: system software testing of laser tracker leica at401 references [1] gasner, g. a r. ruland. instrument tests with the new leica at401 [online]. stanfort, ca, usa: slac, 2011 [cit. 3.5.2014]. available: http://www.slac.stanford.edu/ cgi-wrap/getdoc/slac-pub-14300.pdf [2] leica geosystems. leica absolute tracker at401: asme b89.4.19-2006 specifications. leica geosystems, 2010-12-14, 2010. [3] jokela, jorma a pasi häkli. on tracebility of long distances . lisbon, portugal: international measurement confederation, 2009 [cit. 28.5.2014]. available: http://www. imeko2009.it.pt/papers/fp_100.pdf [4] lechner, jiří et al. nový český státní etalon velkých délek koštice. vúgtk, 2007 [cit. 29.5.2014]. available: http://www.vugtk.cz/odis/sborniky/jine/geos07/ paper/32_lecher_cervinka_umnov_kratochvil/paper/32_lecher_cervinka_umnov_ kratochvil.pdf [5] leica geosystems. formula for calculating the refractive index of ambient air used for the leica at401 of hexagon metrology. , 2013-02-13, 2013. [6] dvořáček, filip. nepřímé určení indexu lomu vzduchu pro výpočet fyzikální redukce elektronických dálkoměrů. geodetický a kartografický obzor. 2013, vol. 101, no. 10, s. 253-266. issn 1805-7446. [7] edlén, b. the refractive index of air [online]. metrologia. 1966, vol. 2, no. 2, s. 71-80 [cit. 28 november 2012] available: http://www.scopus.com/inward/record.url?eid= 2-s2.0-34250009147&partnerid=40&md5=769ccc0fb3a3cd511ba00a8b8cfb7e38 [8] ciddor, p. e. refractive index of air: new equations for the visible and near infrared [online]. applied optics. 1996, vol. 35, no. 9, s. 1566-1572 [cit. 28.11.2012] available: http://www.scopus.com/inward/record.url?eid=2-s2. 0-0030404182&partnerid=40&md5=cc21b031fafd06b6e8cd384decd7a103 [9] ciddor, p. e. a r. j. hill. refractive index of air. 2. group index [online]. applied optics. 1999, vol. 38, no. 2-9, s. 1663-1667 [cit. 28.11.2012] available: http://www.scopus.com/inward/record.url?eid=2-s2.0-0000063725&partnerid= 40&md5=cb84d76662da432a853142d88c93bd9f [10] rueger, j. m. refractive indices of light, infrared and radio waves in the atmosphere. university of new south wales, 2001. isbn 9780733418655. [11] leica geosystems. leica at401 user manual v. 2.0. , 2013. [12] leica geosystems. emscon 3.8: leica geosystems laser tracker programming interface programmers manual. revision 3 ed. switzerland: , 2013-07-27, 2013. [13] hennes, maria et al. state of the art baseline measurements by means of laser tracking – results from an interlaboratory comparison. praha euramet, 2011. 
geoinformatics fce ctu 13, 2014 57 http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-14300.pdf http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-14300.pdf http://www.imeko2009.it.pt/papers/fp_100.pdf http://www.imeko2009.it.pt/papers/fp_100.pdf http://www.vugtk.cz/odis/sborniky/jine/geos07/paper/32_lecher_cervinka_umnov_kratochvil/paper/32_lecher_cervinka_umnov_kratochvil.pdf http://www.vugtk.cz/odis/sborniky/jine/geos07/paper/32_lecher_cervinka_umnov_kratochvil/paper/32_lecher_cervinka_umnov_kratochvil.pdf http://www.vugtk.cz/odis/sborniky/jine/geos07/paper/32_lecher_cervinka_umnov_kratochvil/paper/32_lecher_cervinka_umnov_kratochvil.pdf http://www.scopus.com/inward/record.url?eid=2-s2.0-34250009147&partnerid=40&md5=769ccc0fb3a3cd511ba00a8b8cfb7e38 http://www.scopus.com/inward/record.url?eid=2-s2.0-34250009147&partnerid=40&md5=769ccc0fb3a3cd511ba00a8b8cfb7e38 http://www.scopus.com/inward/record.url?eid=2-s2.0-0030404182&partnerid=40&md5=cc21b031fafd06b6e8cd384decd7a103 http://www.scopus.com/inward/record.url?eid=2-s2.0-0030404182&partnerid=40&md5=cc21b031fafd06b6e8cd384decd7a103 http://www.scopus.com/inward/record.url?eid=2-s2.0-0000063725&partnerid=40&md5=cb84d76662da432a853142d88c93bd9f http://www.scopus.com/inward/record.url?eid=2-s2.0-0000063725&partnerid=40&md5=cb84d76662da432a853142d88c93bd9f comparing speed of web map service with geoserver on esri shapefile and postgis jan růžička institute of geoinformatics, vsb-tu of ostrava 17. listopadu 15, 708 33, ostrava-poruba, czech republic jan.ruzicka.vsb@gmail.com abstract there are several options how to configure web map service using several map servers. geoserver is one of most popular map servers nowadays. geoserver is able to read data from several sources. very popular data source is esri shapefile. it is well documented and most of software for geodata processing is able to read and write data in this format. another very popular data store is postgresql/postgis object-relational database. both data sources has advantages and disadvantages and user of geoserver has to decide which one to use. the paper describes comparison of performance of geoserver web map service when reading data from esri shapefile or from postgresql/postgis database. keywords: postgis, esri shapefile, geoserver, web map service, performation. introduction size of the spatial data grows every day and their management is more and more complicated. geographic information systems has moved from file based systems via database manged data to distributed data management. all three possible ways how to manage spatial data are still available and used. there are advantages and disadvantages in a case of all of them. for example is very easy to copy single file (group of files) to another user in comparison to copy whole database or whole distributed system. or it is very difficult to store large sized data to single file (for example there are limit about 4gb for esri shapefile format [4]). or it is very difficult to perform some queries on large data on single computer. for example kepka and ježek mentioned another disadvantage of postgis: “postgresql with postgis plays a role of the flagship of open source rdbms but there is just limited possibility of simple and fast queries visualisation” [5]. several advatanges of postgis are mentioned for example by stěhule [12] geoserver [1] can produce map outputs according to web map service specification [9]. lot of projects are based on esri shapefile format. 
lot of projects are based on postgresql/postgis system. the question on which was research based was: “how fast will be geoserver when serving maps according to wms if the source of data is postgresql/postgis table or esri shapefile?”. according to results of adam shreier work [11]. i expected that geoserver will be faster when dealing with esri shapefile than with postgresql/postgis table. i have discovered that this is not true and results show that there are other aspects that must be considered. i can conclude that running web map service with geoserver on esri shapefile or postgresql/postgis are comparable. geoinformatics fce ctu 15(1), 2016, doi:10.14311/gi.15.1.1 3 http://orcid.org/0000-0002-5970-1392 http://dx.doi.org/10.14311/gi.15.1.1 http://creativecommons.org/licenses/by/4.0/ j. růžička: comparing of web map service with geoserver and postgis methods the data from cosmc (czech office for surveying, mapping and cadastre) mentioned by med and souček in [8] that represents parcels in the czech republic were used for this research. the services were tested by horák, růžička and ardielli [3]. number of records was about 15 millions and number of vertices was about 200 milions. the esri shapefile files has about 8 gb together (cca 4 gb for geometry (shp file) and 4 gb for attributes (dbf)). the data were stored in esri shapefile and in postgresql/postgis table. for the esri shapefile was created quad tree index. shptree tool [7] with default options was used to build index. for postgresql/postgis table were created two tables one without spatial index and one with gist spatial index [6]. wms based on esri shapefile and on postgresql/postgis table have been prepared. there were three types of wms for postgis table, one for table without index, one for table with index and one for table with index connected via jndi technology [10]. following software and hardware configuration were used to perform testing. • intel(r) core™4 2.4ghz • 48 gb ram • ssd disks • ubuntu server 14.04 • geoserver 2.7 • postgresql 9.3 • postgis 2.1 • jmeter according to recommendations [2] i have eventually tuned postgrruzicka2.bbl.esql and run test on table with index again. tuning was as described in table 1. table 1: tuning parameters parameter without pg_tune with pg_tune default_statistics_target default 50 constraint_exclusion default on checkpoint_completion_target default 0.9 effective_cache_size default 32gb work_mem default 288mb wal_buffers default 8mb checkpoint_segments default 16 shared_buffers 128 mb 11gb max_connections 100 80 five tests were done on wms. each test took 4 hours. • esri shapefile geoinformatics fce ctu 15(1), 2016 4 j. růžička: comparing of web map service with geoserver and postgis • postgis without gist index • postgis with gist index • postgis with gist index on jndi • postgis with gist index on tuned postgresql results table 2 shows responses of web map service running on geoserver with different data sources. there are minimum, maximum and average response times in seconds. table 2: responses of web map service min (s) max (s) avg (s) std (s) postgis without index 6.8 51.0 16.9 2.1 postgis with index 0.2 12.0 2.7 1.7 postgis with index via jndi 0.2 15.9 3.1 2.0 postgis with index on tuned postgresql 0.1 12.0 2.4 1.5 esri shapefile 0.2 25.0 1.7 2.8 the figure 1 shows minimum response time for each configuration. the figure 2 shows maximum response time for each configuration. the figure 3 shows average response time for each configuration. 
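the statistics reported here (minimum, maximum, average and standard deviation of the response time, and the share of requests over a given limit) can be collected with a very small client. the sketch below is not the jmeter test plan actually used; it only illustrates the idea, and the service url, layer name, spatial reference and bounding-box range are made-up placeholders.

```python
# tiny wms getmap load test: sends repeated requests and reports the same statistics
# as tables 2 and 3 (min/max/avg/std and the share of responses over 5 s and 10 s).
# the url, layer, srs and bbox below are placeholders, not the tested configuration.
import random
import statistics
import time
import urllib.parse
import urllib.request

BASE = "http://localhost:8080/geoserver/wms"           # assumed local geoserver instance
LAYER = "test:parcels"                                  # hypothetical layer name

def random_bbox():
    # random 1 x 1 km window inside a made-up area of interest
    x = random.uniform(-790000, -430000)
    y = random.uniform(-1220000, -940000)
    return f"{x},{y},{x + 1000},{y + 1000}"

def getmap_seconds():
    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": LAYER, "styles": "", "srs": "EPSG:5514",
        "bbox": random_bbox(), "width": 800, "height": 600, "format": "image/png",
    }
    url = BASE + "?" + urllib.parse.urlencode(params)
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=120) as resp:
        resp.read()                                     # download the whole image
    return time.perf_counter() - t0

times = [getmap_seconds() for _ in range(200)]          # the paper ran each test for 4 hours
print(f"min {min(times):.1f} s  max {max(times):.1f} s  "
      f"avg {statistics.mean(times):.1f} s  std {statistics.pstdev(times):.1f} s")
print(f"> 5 s: {100 * sum(t > 5 for t in times) / len(times):.2f} %   "
      f"> 10 s: {100 * sum(t > 10 for t in times) / len(times):.2f} %")
```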
table 3 shows number of responses of web map service running on geoserver with different data sources. the limits were specified 5 and 10 seconds. table 3: number of responses over limit % > 5 s % > 10 s postgis without index 100.00 99.8 postgis with index 9.39 0.09 postgis with index via jndi 16.45 0.47 postgis with index on tuned postgresql 5.62 0.02 esri shapefile 10.57 0.12 the figure 4 shows number of responses over 5 seconds in percents. the figure 5 shows number of responses over 10 seconds in percents. the figure 6 shows distribution of response time for each type of connection conclusion i can conclude that running web map service with geoserver on esri shapefile or postgresql/postgis are comparable. there are not significant differences in average time of geoinformatics fce ctu 15(1), 2016 5 j. růžička: comparing of web map service with geoserver and postgis figure 1: minimum time for response figure 2: maximum time for response figure 3: average time for response geoinformatics fce ctu 15(1), 2016 6 j. růžička: comparing of web map service with geoserver and postgis figure 4: number of responses higher than 5 s figure 5: number of responses higher than 10 s response (compare 2.8 and 2.4 seconds). there are significant differences in maximum response time (compare 25 s and 12 s). there are significant differences in number of responses over 5 s (compare 10 % and 5 %). i can conclude that connecting postgresql/postgis via jndi is slower than connecting postgresql/postgis without jndi. the difference is very significant mainly in number of responses over 5 s (compare 16 % and 5 %). i can conclude that tuning postgresql could speed up wms (compare 2.7 s and 2.4 s for average response time or 10 % and 5 % for number of responses over 5 s). i did not tested any other type of index than gist for postgresql/postgis. it may be done in future. i did not tested any other type of index than quad tree index on esri shapefile and i did not tested any than default options for building both indexes. geoinformatics fce ctu 15(1), 2016 7 j. růžička: comparing of web map service with geoserver and postgis figure 6: distribution of time responses references [1] geoserver.org. geoserver. url: http://geoserver.org/. [2] the postgresql global development group. performance optimization postgresql wiki. url: https://wiki.postgresql.org/wiki/performance_optimization. [3] jiří horák, jan růžička, and jiří ardielli. “performance testing of download services of cosmc”. in: geoinformatics fce ctu 10 (nov. 2013), pp. 5–14. doi: 10.14311/ gi.10.1. [4] environmental systems research institute. esri shapefile technical description. url: https://www.esri.com/library/whitepapers/pdfs/shapefile.pdf. [5] michal kepka and jan ježek. “web client for postgis—the concept and implementation”. in: geoinformatics fce ctu 11 (dec. 2013), pp. 63–76. doi: 10.14311/gi.11.5. [6] m. leslie. section 8: spatial indexing. url: http : / / revenant . ca / www / postgis / workshop/indexing.html. [7] mapserver. shptree. url: http://mapserver.org/utilities/shptree.html. [8] michal med and petr souček. “development and testing of inspire themes addresses (ad) and administrative units (au) managed by cosmc”. in: geoinformatics fce ctu 11 (dec. 2013), pp. 77–83. doi: 10.14311/gi.11.6. [9] ogc/iso. web map service. url: http://www.opengeospatial.org/. [10] oracle. java naming and directory interface (jndi). url: http://www.oracle.com/ technetwork/java/jndi/index.html. 
geoinformatics fce ctu 15(1), 2016 8 http://geoserver.org/ https://wiki.postgresql.org/wiki/performance_optimization http://dx.doi.org/10.14311/gi.10.1 http://dx.doi.org/10.14311/gi.10.1 https://www.esri.com/library/whitepapers/pdfs/shapefile.pdf http://dx.doi.org/10.14311/gi.11.5 http://revenant.ca/www/postgis/workshop/indexing.html http://revenant.ca/www/postgis/workshop/indexing.html http://mapserver.org/utilities/shptree.html http://dx.doi.org/10.14311/gi.11.6 http://www.opengeospatial.org/ http://www.oracle.com/technetwork/java/jndi/index.html http://www.oracle.com/technetwork/java/jndi/index.html j. růžička: comparing of web map service with geoserver and postgis [11] adam schreier. porovnání rychlosti mapového serveru geoserver při přístupu k různým datovým skladům [online]. bachelor’s thesis. 2014 [cit. 2016-04-05]. url: http : / / theses.cz/id/jqqz7x/. [12] pavel stěhule. “postgis pro vývojáře”. in: geoinformatics fce ctu 2 (dec. 2007), pp. 71–90. doi: 10.14311/gi.2.11. geoinformatics fce ctu 15(1), 2016 9 http://theses.cz/id/jqqz7x/ http://theses.cz/id/jqqz7x/ http://dx.doi.org/10.14311/gi.2.11 ________________________________________________________________________________ geoinformatics ctu fce 2011 10 queryarch3d: querying and visualising 3d models of a maya archaeological site in a web-based interface giorgio agugiaro1, fabio remondino1, gabrio girardi2, jennifer von schwerin3, heather richards-rissetto4, raffaele de amicis2 1 3d optical metrology unit, bruno kessler foundation, trento, italy {agugiaro, remondino}@fbk.eu web: http://3dom.fbk.eu 2 fondazione graphitech, trento, italy {gabrio.girardi, raffaele.de.amicis}@graphitech.it, web: http://www.graphitech.it 3 dept. of art and art history, university of new mexico, usa jvonschw@unm.edu 4 humlab, umea university, sweden heather.richards@humlab.umu.se keywords: 3d gis, 3d modelling, visualisation, maya abstract: constant improvements in the field of surveying, computing and distribution of digital-content are reshaping the way cultural heritage can be digitised and virtually accessed, even remotely via web. a traditional 2d approach for data access, exploration, retrieval and exploration may generally suffice, however more complex analyses concerning spatial and temporal features require 3d tools, which, in some cases, have not yet been implemented or are not yet generally commercially available. efficient organisation and integration strategies applicable to the wide array of heterogeneous data in the field of cultural heritage represent a hot research topic nowadays. this article presents a visualisation and query tool (queryarch3d) conceived to deal with multi-resolution 3d models. geometric data are organised in successive levels of detail (lod), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. the visualisation and query front-end enables the 3d navigation of the models in a virtual environment, as well as the interaction with the objects by means of queries based on attributes or on geometries. the tool can be used as a standalone application, or served through the web. the characteristics of the research work, along with some implementation issues and the developed queryarch3d tool will be discussed and presented. 1. 
introduction and related works steady advances in the field of surveying, computing and digital-content delivery are changing the approach cultural heritage can be virtually explored: thanks to such new methodologies, not only researchers, but also new potential users like students and tourists, are having the chance to use a wide array of new tools to obtain (3d) information and perform analyses with regards to art history, architecture and archaeology. one useful possibility is offered by computersimulated 3d models, representing for example both the present and the hypothetical status of a structure. such digital models are sometimes linked to heterogeneous information and queried by means of (sometimes web-enabled) gis tools. in such a way, relationships between structures, objects and/or artefacts can be explored and the changes over space and time can be analysed. for some research purposes, a traditional 2d approach generally suffices, however more complex analyses concerning spatial and temporal features of architectures or archaeological sites require 3d tools, which, in some cases, have not yet been implemented or are not yet generally available. nowadays reality-based 3d models of large and complex heritage sites are generated using methodologies based on image data [1], range data [2], classical surveying or existing maps [3]. the choice depends on the required accuracy, object dimensions and location, surface characteristics, working team experience, project‟s budget, final goal, etc. however, more and more often the different methodologies are combined to derive multi-resolution data at different levels of detail (lod), both in geometry and texture, and exploit the intrinsic advantages of each technique [4, 5, 6]. ________________________________________________________________________________ geoinformatics ctu fce 2011 11 figure 1: examples of access to 3d geometric data linked to external information: [left] google earth allows retrieval of information by clicking on selected objects, but no multi-criteria queries; [right] a gis environment (esri arcscene ř.3) allows more elaborate queries, but lacks in visualising “heavy” realitybased models. one interesting opportunity offered by reality-based 3d models is to use them as visualisation “containers” for different kinds of information. given the possibility to link their geometry to external data, 3d models can be analysed, split in their sub-components and organised following proper rules. this is, for example, the case of (modern) buildings, where their geometry, topology and semantic information is organised in building information models (bims). by extending the concept of bims to the framework of cultural heritage, it is easy to understand that these properties/capabilities could facilitate data organisation, storage, use and communication. powerful 3d visualisation tools already exist, but often they implement no or only limited query functionalities for data retrieval, possibly web-based. queries are actually typical functions of current gis packages, which very often fall short when dealing with detailed and complex 3d data ( figure 1). probably, one of the most well-known examples is google earth: the user can browse through the geospatial data and get, when necessary, external information by clicking on the selected object, or by activating a selectable layer. 
however, more complex, interactive queries are not implemented: it is not possible, for instance, to select all structures in a city/site built between a certain interval of time (i.e. 350-400 ad), or planned by a certain architect (provided this information is given and linked to the geometric models). different authors have proposed solutions for 3d data management and visualisation, possibly on-line [7, 8, 9, 10, 11, 12] but, no unique, reliable and flexible package/implementation is commercially available nowadays. when it comes to data modelling and storage, citygml [13] represents a common information model for the representation of 3d urban objects, where the most relevant topographic objects in cities and their relations are defined, with respect to their geometrical, topological, semantic and appearance properties. unfortunately, even citygml‟s lod4, the highest level of detail, is not meant to handle highresolution, reality-based models, which are characterised by complex geometry and detailed textures. moreover, citygml is conceived for modern buildings, and – understandingly – not for archaeological models/sites, which generally differ in terms of scale and scope. regarding visualisation, some development tools exist in the videogames domain and can be adapted to support 3d geodata (e.g. unity3d, osg, ogre3d, opensg, 3dvia virtools, etc.) but with limited capabilities when it comes to displaying large and complex reality-based 3d models. when it comes to (3d) web services, some initial experiences were carried out [13], but, again, a standard and widely accepted solution does not exist as of today. keeping in mind the mentioned approaches and the existing limitations, an ideal tool able to perform analyses in the framework of architectural and archaeological cultural heritage should be able to perform (at least) the following four tasks: a) handle fully 3d multi-resolution datasets, b) allow queries based both on geometries and on attributes, c) support 3d visualisation/navigation of the models, d) permit both local and on-line access to the contents. this article introduces and describes the queryarch3d tool, which is the result of a project aiming at creating a tool chain for a web-based visualisation and interaction with a reality-based multi-resolution 3d model, i.e. aiming at the above-mentioned four characteristics. as test field, the archaeological maya site in copan (honduras) was chosen. copan is an unesco world heritage site and one of the most thoroughly investigated maya cities, located on the southern periphery of the ancient maya world, in today‟s honduras. the site contains over 3700 structures. thanks to excavation studies, a dynasty of sixteen kings ruling between the 5th and the 9th cent. a.d. could be identified. temple 22 is one of the most representative structures. it was once three storeys high and covered with plaster, paint and sculpture [15]. however, today only the first storey remains, the upper facades and sculptures have collapsed making it ________________________________________________________________________________ geoinformatics ctu fce 2011 12 is difficult for visitors to imagine this temple without the aid of 3d reconstructions. different types of data exist and have been created during the course of time: the first recorded surveying of the archaeological site dates back to the 19th cent. [ 16, 17] as schematic maps of the principal group were drawn. more detailed maps were successively published in 1896) [18] and 1947 [19]. 
from the 1980s are maps and drawings of the principal group at scales of 1:100 and 1:200 [1ř], while archaeologists on the proyecto arqueológico copán (pac i) published maps of the valley‟s residential sites at a scale of 1:2000 [21]. gis data of copan has been created only recently by maca [22] and richards-rissetto [23]: pac i maps (covering 24 km2) were digitised, georeferenced and integrated with more recently available large-scale maps to create a gis for the entire copan valley, containing data of archaeological buildings, hydrology, contour lines and a digital elevation model (dem) of the valley. in 2009, high-resolution 3d data were acquired using terrestrial photogrammetry, uav platforms and terrestrial laser scanning [24]. using and combining all these data, the 3d contents for the web-based visualisation and interaction queryarch3d tool were created. 2. the queryarch3d tool as mentioned in the previous section, no tool currently exists which is able to guarantee the four identified properties. thus the goal of this research work is to implement a prototype, called queryarch3d, which can fulfil all aforementioned tasks. queryarch3d is tailored to the needs of researchers working at the copan archaeological site, but basic concepts can be extended and generalised to similar contexts. before proceeding with the development of the queryarch3d tool, a general check was performed on all available data (spatial and non-spatial) for potential incompatibilities (different formats, different modelling paradigms, etc.), geometric and/or semantic inconsistencies. the development of queryarch3d was then split into four successive steps: i. definition of a conceptual schema for lod, adoption of geometric and semantic hierarchies, ii. check and structure existing data accordingly, iii. data integration and homogenisation, iv. development of the visualisation and query front-end. 2.1 step i – levels of detail and hierarchies in order to cope with the complexity of multiple geometric models, a conceptual scheme was defined to handle multiple levels of detail, which are required to reflect independent data collection processes, levels of detail facilitate in fact efficient visualisation and data analysis. for the copan site, four levels of detail were identified for the structures: the higher the lod rank is, the more detailed and accurate the model is. the used levels of detail are (figure 2): lod1 contains simplified 3d prismatic entities with flat roofs. all lod1 models were obtained starting from the gis data [23], i.e. a 2d shapefile with attribute data containing also the structures‟ height. polygon features were first triangulated and then extruded. lod2 contains 3d structures for some exteriors of the buildings. the sub-structures (e.g. walls, roofs or external stairs) can be identified. for the lod2, hypothetical reconstructions models obtained in 3d studio max were used. lod3 adds the interior elements (rooms, corridors, etc.) to the structures. some simplified, reality-based models can be optionally added, both to the interior and to the exterior of the structures. for the copan dataset, the interior rooms of the hypothetical reconstruction models of lod2 were used, plus some reality-based simplified models of two stelae, the corner mask and the interior doorway of temple 22. these reality-based models were obtained from the more detailed ones (acquired in 2009 [24] and used in lod4) by applying mesh simplification algorithm. 
the geometric simplification was in the order of 30% of the original models. lod4 contains structures (or parts of them) like high-resolution geometrical 3d models. these models were further segmented into sub-parts. currently, lod4 contains the segmented models of stela a and stela b, as well as the corner mask and the interior doorframe of temple 22. the adoption of a lod-dependent hierarchical schema required the contextual definition of geometric and semantic hierarchical schemas. this was achieved by an accurate identification and description of the so-called “part-ofrelations”, in order to guarantee a spatio-semantic coherence [25]. at the semantic level, once every structure is defined (e.g.: what is a temple or a palace? how is it characterised? what are its components?), its entities are represented by features (stairs, rooms etc.) and they are described by attributes, relations and aggregation hierarchies (part-of-relations) between features. if a query on an attribute table is carried out for a certain roof, the user should retrieve information no t only about the roof itself, but also about which structure contains that roof. this is exemplified in the hierarchy shown in figure 4 (left), which is based on a copan temple. however, the semantic hierarchy needs to be linked to the corresponding geometries, too: if a query on an attribute table is carried out for a certain roof, not only the linked attributes should be retrieved, but – if needed – also the corresponding geometric object. this operation requires however to structure the geometric models in compliance with the hierarchy. ________________________________________________________________________________ geoinformatics ctu fce 2011 13 2.2 step ii – data check and structuration in case of the lod1 models, a data aggregation was necessary in order to reduce the ca. 19000 polygons to the current ca. 3700 structures. data aggregation was performed on the basis of the existing attributes, after some manual editing was carried out to check geometries for topology errors (overlaps and gaps) and to correct and normalise the attached attributes table. an example is given in figure 3. for lod2, lod3, lod4 the segmentation of the models into sub-parts was carried out according to the hierarchical schemes in order to perform a proper classification and the subsequent assignment of attributes to each segment. an example is given in figure 4 (right). figure 2: different levels of detail (lod) in the queryarch3d tool for the temple 22 structure. clockwise from top-left: lod1 with prismatic geometries, lod2 with more geometric details (only exterior walls), lod3 with interior walls/rooms and some simplified reality-based elements, lod4 with high-resolution reality-based 3d models. 2.3 step iii – data homogenisation and integration once all data were checked and structured, data homogenisation and integration could be performed: all geometric models were aligned in order to spatially “fit” together (e.g. the reality-based corner mask with the temple 22 models) and georeferenced, in order to share the same coordinate system. a tin-based digital terrain model for landscape contextualisation was created using gis data [23]. to all structures objects were finally given an elevation value, taken from the dtm. 
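the part-of relations and the lod concept described above map naturally onto relational tables. the sketch below shows one possible, simplified layout in postgresql/postgis (the data store introduced in the next paragraph); all table names, column names and the coordinate reference system are invented for illustration and do not reproduce the project's actual schema.

```python
# hypothetical, simplified postgresql/postgis layout for the lod-dependent part-of
# hierarchy; names and srid are invented for illustration, not the project's schema.
import psycopg2

DDL = """
create table if not exists structure (
    structure_id serial primary key,
    name         text,                  -- e.g. 'temple 22'
    struct_type  text,                  -- temple, palace, altar, stela, ...
    built_from   integer,               -- year of construction (start)
    built_to     integer
);

create table if not exists component (
    component_id serial primary key,
    structure_id integer references structure,  -- part-of relation to the parent structure
    parent_id    integer references component,  -- part-of relation inside the structure
    comp_type    text,                  -- wall, roof, stair, room, doorway, ...
    lod          integer check (lod between 1 and 4),
    geom         geometry(MultiPolygonZ, 32616) -- one geometry per lod segment (utm 16n assumed)
);

-- selecting a single segment can climb the hierarchy, so a query on a roof also
-- returns the structure that contains it:
--   select s.name, c.comp_type
--     from component c join structure s using (structure_id)
--    where c.component_id = 42;
"""

with psycopg2.connect("dbname=queryarch3d") as conn:    # placeholder connection string
    with conn.cursor() as cur:
        cur.execute(DDL)
```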
with respect to all available non-spatial tabular data (mainly coming from ms access databases, text files, filemaker pro databases), non-spatial tabular data (mainly coming from ms access databases, text files, filemaker pro databases), they were checked, restructured and integrated. in order to reduce data-formats heterogeneity, postgresql was chosen as dbms where to store all data. moreover, thanks to its postgis extension, spatial data also could be stored in the same database, providing a valuable (and unique) data management system. figure 3: aggregation of geometric features at lod1: from ca. 19000 polygons [left] to ca. 3700 structures [right]. ________________________________________________________________________________ geoinformatics ctu fce 2011 14 figure 4: semantic and geometric hierarchies: [left] example of semantic segmentation for a typical copan temple. [right] example of geometric segmentation of the interior doorway of temple 22 for a lod4 model. 2.4 step iv – front-end development for data administration purposes, a simple front-end, based on microsoft access 2010 and relying on access runtime 2010, was developed and distributed to the project members. the front-end connects directly to the postgresql server and allows update operations on the data currently stored. for the interactive 3d visualisation and query front-end, the game engine unity3d, an integrated authoring tool for creation of 3d interactive content, was adopted. applications can be developed for all major platforms as well as for web sites requiring in the latter case a free plugin to access embedded contents ( figure 5). moreover, unity can communicate with external databases and retrieve data when needed, e.g. by means of a php interface between unity and postgresql. as soon as the application is run, all the remotely stored information like structure types, structure names, year of construction etc. is retrieved and assigned to the respective geometric objects. for the navigation in the 3d environment, three modes were implemented ( figure 6): a) an aerial view over the whole archaeological site, where only lod1 models are shown; b) a ground-based walkthrough mode, where the user can approach and enter any structure up to lod3 (provided such a model exists, otherwise a lower-ranked model at lod2 or lod1 is visualised); c) a detail view, where lod4 models are presented. inside the 3d environment front-end, the user can perform attribute queries over the whole dataset (e.g. “highlight all structures built by a ruler x”; “highlight all altars”; “highlight only stelae belonging to group y and built in year z”). the user can also perform standard queries with a mouse click: once a geometric object is selected, the related attribute values are retrieved from the external database and shown in a text box (figure 7). the amount of retrieved information depends on the lod: for lod1 structures, only global attributes are shown, while for higher lod also the detailed information is shown, according to the selected segment. finally, distances between two objects in the 3d world can be measured, and line-of-sight tests between two selectable objects can be performed. figure 5: front-ends for the queryarch3d tool: [left] data administration gui (edit/update) via microsoft access 2010 runtime platform. [right] the web-based interactive visualisation and query front-end for data exploration. 
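the thin server-side layer between the viewer and the database can be illustrated in a few lines. the project itself uses a php script; the sketch below expresses the same idea in python as a hypothetical http endpoint that returns the attributes of a selected segment as json, reusing the invented table names of the schema sketch above. the endpoint path, port and query are assumptions, not the project's code.

```python
# illustrative stand-in for the php layer between the unity3d client and postgresql:
# GET /attributes?component_id=42 returns the stored attributes of one selected segment
# as json. endpoint name, port and table names are assumptions, not the project's code.
import json
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

import psycopg2

conn = psycopg2.connect("dbname=queryarch3d")           # placeholder connection string

class AttributeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urllib.parse.urlparse(self.path)
        if parsed.path != "/attributes":
            self.send_error(404)
            return
        query = urllib.parse.parse_qs(parsed.query)
        comp_id = int(query.get("component_id", ["0"])[0])
        with conn.cursor() as cur:
            cur.execute(
                "select s.name, s.struct_type, s.built_from, c.comp_type, c.lod "
                "from component c join structure s using (structure_id) "
                "where c.component_id = %s", (comp_id,))
            row = cur.fetchone()
        body = json.dumps(dict(zip(
            ["structure", "type", "built_from", "component", "lod"], row or []))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), AttributeHandler).serve_forever()
```

on the client side, the unity application then only needs to issue the corresponding http request for the object that was clicked and parse the returned json to fill the attribute text box of the viewer.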
________________________________________________________________________________ geoinformatics ctu fce 2011 15 figure 6: navigation modes in queryarch3d: [left] aerial view with lod1 models only, [right] walkthrough mode, with mixed lod1-to-lod3 models. an example of the detail view for lod4 models is given in figure 2 (bottom-right). figure 7: different data interrogation modes in queryarch3d. [top-left] query by attributes with results displayed in terms of geometries. [top-right and bottom] lod-dependent queries on geometric models: at lod1, only global attributes are shown, for lod2 to lod4 models also sub-parts can be selected and more detailed information is retrieved and visualised. 3. conclusions and outlook this article presented the development of the queryarch3d tool. the goal of the queryarch3d is to address some open issues regarding multi-resolution data integration, access and visualisation in the framework of cultural heritage. four requirements, considered of crucial importance when dealing with archaeological and architectonical reality-based 3d models, were set and inserted in the queryarch3d tool: a) the capability to handle multi-resolution models, b) the capability to query geometries and attributes in the same virtual environment, c) the capability to allow 3d data ________________________________________________________________________________ geoinformatics ctu fce 2011 16 exploration, d) the capability to offer on-line access to the data. the maya archaeological site of copan in honduras was chosen as a test field, due to its extent (ca. 24 km2), its considerable number of mapped structures (over 3700) and the availability of several heterogeneous datasets. in order to integrate the existing data in a coherent way, different levels of detail (lod) were defined while the 3d models were manually segmented paying attention to both semantics and geometry. finally all geometric models were integrated with attribute data gathered from several external sources. the integrated data are stored in postgresql, while the interactive 3d visualisation is achieved using the game engine unity3d, which is connected to the database by means of a php script. the front-end visualisation allows the user to navigate interactively in a virtual environment, where the existing archaeological structures can be visualised and queried in a lod-dependent way. according to the observer‟s distance from the object, the visualised geometry varies from low-resolution prismatic geometries to high-resolution meshes. at the same time, the amount of data retrieved from the database is dependent on the lod: just global information is shown at a coarse lod, while more detailed attributes are shown at higher lod. some spatial functions (like distance measurements and visibility analysis) have been also implemented. the 3d multi-resolution model is now accessible to the project members via web for visualisation, studies, interaction, queries, educational purposes as well as further tests and evaluation. future developments and improvements for the queryarch3d tool will add more spatial functions (beside the already existing distance measurements and visibility analysis) and more models at lod2 to lod4, consistently enriching the attributes related to the entities. moreover, most of the structures are neither textured nor chromatically characterised. the very first improvement of the buildings will take this into account. 
it should also be possible to distinguish real structures from virtually reconstructed ones. regarding the database storage system, some functions should be added and/or improved. to name an example, postgis itself offers support to store 3d features, but all gis functions are still substantially 2d, i.e. 3d "out-of-the-box" spatial analysis tools are still to come. besides the building structures, only a coarse tin is used and objects are simply placed on top of it, leading to geometric inconsistencies in some places. a better integration of high-resolution models into a coarser dtm should therefore be taken into account, as proposed for instance in [26]. adding more high-resolution models into an on-line virtual environment requires good hardware and internet connections. proper strategies will have to be tested and adopted to keep the user experience acceptable as the number of models (and polygons) grows.

8. acknowledgements

this project was supported by a national endowment for the humanities digital humanities start-up grant (usa) given to the university of new mexico's department of art and art history, and also by the alexander von humboldt foundation (germany), the humlab, umea university (sweden) and the foundation for the advancement of mesoamerican studies (usa). the authors would like to thank maurizio forte and fabrizio galeazzi (uc merced, usa) and laura ackley, who cooperated to build the 3d studio max model of temple 22, and alessandro rizzi (fbk trento), who helped in the processing of the reality-based 3d models.

9. references

[1] remondino, f., el-hakim, s., girardi, s., rizzi, a., benedetti, s., gonzo, l.: 3d virtual reconstruction and visualisation of complex architectures – the 3d-arch project. iaprs&sis vol. 38(5/w1), 2009.
[2] vosselman, g., maas, h.: airborne and terrestrial laser scanning. crc, boca raton, isbn 978-1904445-87-6, 2010.
[3] yin, x., wonka, p., razdan, a.: generating 3d building models from architectural drawings: a survey. ieee comp. graph. appl., 29(1), pp. 20-30, 2009.
[4] grün, a., remondino, f., zhang, l.: the bamiyan project: multi-resolution image-based modeling. recording, modeling and visualisation of cultural heritage, taylor & francis/balkema, isbn 0 415 39208 x, pp. 45-54, 2005.
[5] guidi, g., remondino, f., russo, m., menna, f., rizzi, a., ercoli, s.: a multi-resolution methodology for the 3d modeling of large and complex archaeological areas. international journal of architectural computing, 7(1), pp. 40-55, 2009.
[6] takase, y., yano, k., nakaya, t., isoda, y., tanaka, s., kawasumi, t., et al.: virtual kyoto – a comprehensive reconstruction and visualisation of a historical city. proceedings of the 9th conference on optical 3-d measurement techniques, vienna, austria, vol. i, pp. 11-20, 2009.
[7] khuan, t.c., abdul-rahman, a., zlatanova, s.: 3d spatial operations in geo dbms environment for 3d gis. gervasi and gavrilova (eds.), iccsa 2007, lncs 4705, part i, berlin, pp. 151-163, 2007.
[8] kibria, m.s., zlatanova, s., itard, l., van dorst, m.: geoves as tools to communicate in urban projects: requirements for functionality and visualisation. lee and zlatanova (eds.), 3d geo-information sciences, lng&c, springer verlag, pp. 379-412, 2009.
[9] conti, g., simões, b., piffer, s., de amicis, r.: interactive processing service orchestration of environmental information within a 3d web client. proceedings of gsdi 11th world conference on spatial data infrastructure convergence, rotterdam, the netherlands, 2009.
[10] apollonio, f.i., corsi, c., gaiani, m., baldissini, s.: an integrated 3d geodatabase for palladio's work. international journal of architectural computing, vol. 2(8), 2010.
[11] manferdini, a., remondino, f.: reality-based 3d modeling, segmentation and web-based visualisation. proceedings of euromed 2010, lncs 6436, springer verlag, pp. 110-124, 2010.
[12] de luca, l., busarayat, c., stefani, c., veron, p., florenzano, m.: a semantic-based platform for the digital analysis of the architectural heritage. computers & graphics, vol. 35(2), pp. 227-241, 2011.
[13] kolbe, t.h.: representing and exchanging 3d city models with citygml. lee, zlatanova (eds.), 3d geoinformation sciences, springer verlag, 2009.
[14] schilling, a., neubauer, s., zipf, a.: putting gdi-3d into practice: experiences from developing a 3d spatial data infrastructure based on opengis standards for the sustainable management of urban areas. fig commission 3, international workshop on spatial information for sustainable management of urban areas, mainz, germany, 2009.
[15] von schwerin, j.: the sacred mountain in social context: design, history and symbolism of temple 22 at copán, honduras. ancient mesoamerica (22:2), in press, 2011.
[16] graham, i.: juan galindo, enthusiast. estudios de cultura maya, vol. iii, pp. 11-35, mexico, d.f., 1963.
[17] stephens, j. l.: incidents of travel in central america, chiapas and yucatan. 2 vols., new york, usa, 1841.
[18] gordon, g.b.: prehistoric ruins of copan, honduras. a preliminary report of the explorations by the museum, 1891-1895. memoirs of the peabody museum of american archaeology and ethnology, harvard university, vol. i, pp. 1-48, 1896.
[19] strömsvik, g.: guidebook to the ruins of copan. carnegie institution of washington, publ. no. 577, washington, d.c., usa, 1947.
[20] hohmann, h., vogrin, a.: die architektur von copán (honduras). vermessung, plandarstellung, untersuchung der baulichen elemente und des räumlichen konzepts. 2 bände, akademische druck- und verlagsanstalt, graz, 1982.
[21] fash, w.l., long, k.z.: mapa arqueológico del valle de copán. introducción a la arqueología de copán, honduras, tomo iii. tegucigalpa, d.c., proyecto arqueologico copán, secretaria de estado en el despacho de cultura y turismo, 1983.
[22] maca, a. l. jr.: spatio-temporal boundaries in classic maya settlement systems: copan's urban foothills and the excavations at group 9j-5. ph.d. thesis, department of anthropology, harvard university, cambridge, massachusetts, usa, 2002.
[23] richards-rissetto, h.: exploring social interaction at the ancient maya site of copán, honduras: a multi-scalar geographic information systems (gis) analysis of access and visibility. ph.d. dissertation, university of new mexico, 2010.
[24] remondino, f., grün, a., von schwerin, j., eisenbeiss, h., rizzi, a., sauerbier, m., richards-rissetto, h.: multi-sensors 3d documentation of the maya site of copan. proceedings of 22nd cipa symposium, kyoto, japan, 2009.
[25] stadler, a., kolbe, t.h.: spatio-semantic coherence in the integration of 3d city models. proceedings of the 5th international symposium on spatial data quality issdq 2007, enschede, netherlands, isprs archives, 2007.
[26] agugiaro, g.: advanced methodologies of acquisition, integration, analysis, management, visualization and distribution of data in the framework of archaeological and architectonical heritage. ph.d. thesis, università di padova & technische universität berlin, 2009. http://paduaresearch.cab.unipd.it/2342/
web client for postgis—the concept and implementation

michal kepka, jan ježek
geomatics section, department of mathematics, faculty of applied sciences, university of west bohemia, univerzitní 8, 30614 plzeň
{mkepka, jezekjan}@kma.zcu.cz

abstract

postgresql with the postgis extension plays one of the major roles in many complex gis frameworks. there are many possibilities for accessing such data storage, but most of them may be seen as not simple for new users. in this paper we introduce the concept and implementation of a web-based postgis client application. the main emphasis of the described solution is placed on simplicity and a straightforward approach to the visualisation of general sql queries.

introduction

fast, online visualisation of complex sql queries that include spatial content, without any scale limitation, is a challenge that can hardly be fulfilled with the software tools available nowadays. postgresql with postgis plays the role of the flagship of open-source rdbms, but there are only limited possibilities for simple and fast visualisation of queries. widely used gis applications offer certain solutions, but their approaches have some limits and can be seen as too complex. we have designed the concept and implementation of software that makes the output of sql queries available through web services. the kml format was chosen as the output data format, as it is the most common format in the internet environment. we have also designed a mechanism that helps to simplify the output data so that even large-scale results can be easily cartographically visualised. a server-side java application with a rest api has been implemented for that purpose. the application accepts the user's sql query, executes it in the database and provides an http service with the results in kml format. such kml is recursively generated according to the bounding box of the user request and provides a relevant level of detail of the particular data. the benefits of the developed application are simple access and straightforward visualisation with the power of sql, together with the comprehensive list of spatial functions available in postgis. the developed application can be useful for data mining and analyses as well as for educational purposes in the field of spatial databases.

current state

let us consider a situation where vector data are stored in a spatial database. a user manages this database with an administration software tool, runs an analysis in the form of an sql query and receives the results as an ordinary table, including the situation where geometry is the main result of the analysis. if a user wants to see the geometry results in an appropriate map view, there are several ways to preview spatial data. one way is to connect the database to an ordinary gis application. we can mention one of the very popular gis applications, qgis. qgis can connect to several types of databases and add a table with a geometry column as a new layer. spatial features of this layer can be filtered by a search or by a query [1]. a second common way is publishing the data through a web service. geoserver [2] and mapserver [3] can be mentioned as examples of very popular server-side software for serving spatial data through the web.
a database can be connected to these products and the particular data can then be published through standardised services (wms, wfs [4]). such a solution is very useful, but if we consider arbitrary queries, we would have to prepare a specific configuration for each of them. geoserver and mapserver support many standards for publishing spatial data, e.g. wms, wfs and wcs. another way is to use specialized desktop administration software for viewing and managing spatial data in the database. one of these is georaptor, an extension for oracle sql developer. georaptor enables easy adding of a table as a layer into a spatial view where features are drawn in a coordinate frame [5]. this solution provides an easy way to preview spatial data directly in the administration tool where the sql queries are run, but it usually does not enable adding any underlying map. when we consider a large volume of data, we also need to take advanced visualisation algorithms into account (such as clustering and simplification [6]). another possibility is to render the data in advance into raster tiles and then provide the output in raster format. such a solution improves data availability, but the drawback is a more complicated data update.

software concept

our use case can be described as follows. we have many large datasets stored in the postgis database and we need to run analyses and effectively publish large results of spatial queries. it means that we need an application that satisfies the given requirements – accepting queries in the form of sql and quickly displaying the results of spatial queries. the spatial results are meant to be displayed directly in a map window or exported in a common gis format (e.g. shapefile, kml, geojson) without any additional configuration. the main goals of the proposed solution are:

1. analyses will be performed directly by the database management system (dbms).
2. the application will accept sql.
3. results of analyses with geometry columns will be produced in a common gis format or displayed directly in a map window.
4. large data obtained from spatial queries will use a convenient type of simplification to improve display performance.
5. the application will minimize the number of steps in the process from analysis draft to displayed result.

if spatial data are stored in the database, it is more convenient to access and process the data directly by sql commands rather than by using third-party software. usually the sql commands can be run in an administration client and the query results are opened directly in the form of tables. spatial databases mostly have conversion functions to convert the geometry data type into several gis formats. afterwards, the results can be exported from the database and opened in common gis applications. several gis formats were considered during the design phase of the application. the gml format was taken into account because it is an ogc standard; however, gml is very complex and is focused more on the description of geographic objects than on their visualisation. esri shapefile was excluded because it is a set of several files and it would bring more administration during file transfer. for the purpose of the application we have chosen the kml format as the output format for the geometry column.
the reasons are:

• kml is an ogc standard for spatial data
• kml is widely supported by gis applications and very popular among amateur users at this time
• kml has an element called networklink for bi-directional communication between server and client applications

the developed application can use the postgis function st_askml() to convert the geometry data type into the kml format [7]. basically, there are two concepts of such a kml output. the first is a common static kml file with a specific part of the query result. this static kml file will contain the geometry, the non-geometry attributes and the style describing how the geometry should be drawn. the second option is a dynamic kml file that will contain a description of the data and a networklink element. the networklink element enables bi-directional communication between a server and a client application [8]. such an element contains the url of the server, the bounding box (bbox) for the selection of the appropriate part of the result that fits the current map window and, finally, parameters describing the conditions for data refreshing. an example of a networklink element with defined bbox parameters of the current map window is shown in example 1.

example 1: source code of a networklink element

<NetworkLink>
  <name>geometry features</name>
  <visibility>1</visibility>
  <open>1</open>
  <description>feature from database specified by sql query</description>
  <refreshVisibility>0</refreshVisibility>
  <flyToView>0</flyToView>
  <Link>
    <href>http://whatstheplan.eu/analyst_p4b/kmlservlet?queryid=1375356214822</href>
    <refreshInterval>2</refreshInterval>
    <viewRefreshMode>onStop</viewRefreshMode>
    <viewRefreshTime>1</viewRefreshTime>
    <viewFormat>bbox=[bboxWest],[bboxSouth],[bboxEast],[bboxNorth]</viewFormat>
  </Link>
</NetworkLink>

the designed application can be divided into three main parts – database, server-side and a thin web client. the design of these parts is described in detail below.

database part

the database part of the designed application is the core part for storing and analysing data. this part consists of a database schema, a data model and stored functions. the schema guarantees keeping the tables with query results separate from other schemas with spatial data. the designed data model contains one metadata table, stored procedures and tables that contain the query results. fig. 1 shows the designed metadata table schema and an example of one query result table. the metadata table is designed to contain information on the processed sql query and values that facilitate selecting the appropriate part of the result matching the map window which the user is looking at. stored procedures control the launching of a given sql query and create a new entry in the metadata table after the sql query finishes. the procedures hold the processing of one sql query as one transaction to keep the consistency of the model. the main procedure returns the identifier of the query result to the server-side part of the application. another function enables an update of the sql, and finally there is a function that deletes the result table and the relevant entry in the metadata table by a given identifier of the query result.

server-side part

the server-side part of the designed application manages and controls the whole application. this part can be further divided into several modules that are shown in fig. 2 below. there is a module called connector that manages connections to the database using a connection pool mechanism.
the module receiver receives sql queries from the user interface, checks their correctness as select queries and transfers them to the created function in the database through the connector module. the receiver also receives the parameters of the map window to select the appropriate part of the result. the sql for data selection is compiled in the sql creator module. the kml creator module receives from the database the identifier of the finished query result and returns it to the user interface. the result modeller module receives the features of the query result requested by the user interface for visualization and prepares a list of objects for the kml output files. the kml creator module is designed to use a freemarker template (see [8] for freemarker details). the publishing module of the application has been designed according to the representational state transfer (rest) software architecture [9].

graphical user interface

the graphical user interface (gui) is another part of the designed application. this part is responsible for communication with users, enables the insertion of sql queries and simple result visualization in the map window, or downloading results in the form of kml files. the gui has been designed as a thin web client with basic functionality for sql handling and result visualization.

figure 1: metadata table with one query result table

simplification algorithm

as has already been mentioned, we also take large datasets and large query results into consideration. the utilization of a simplification method for the geometric parts of query results was considered already in the first draft of the application. the query results can be simplified for effective visualization after the analysis is made. it is necessary to select a kind of simplification method that is not too time-consuming and nevertheless can substantially facilitate the visualization of results. we considered several simplification methods with regard to the datasets that were available during the application development.

figure 2: designed schema of the server-side part of the application

k-means method

this method [10, 11] divides the set of data into a predefined number of clusters. the clustering starts with the definition of centroids. each point is classified to the nearest centroid. new centroids are computed according to the current shapes of the clusters after all points have been classified. this procedure is iteratively repeated until a defined convergence criterion is reached. the convergence criterion can be a minimal change of the clusters or a minimal move of the centroids between steps. the advantages of this method are its simplicity, short running time and the existence of a k-means clustering module for the postgresql database [12]. the disadvantage is that the solution depends on the initial choice of centroids. several modifications of the basic method exist that refine diverse characteristics of the basic method [13, 14]. the optimized k-means method refines the cluster centroids already during the classification of each point. the k-medoids method selects as cluster centroids existing points that are closest to the precise centroids. the fuzzy k-means method determines a degree of membership in the clusters. the spherical k-means method differs from the previous methods in the way clusters are created: it starts with all points in one cluster, and the first cluster is progressively divided into a defined number of clusters.
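for reference, the quantity that the basic k-means procedure described above minimises can be written as the within-cluster sum of squared distances (this is the standard textbook formulation, not something specific to the cited postgresql module):

\min_{C_1,\dots,C_k} \sum_{j=1}^{k} \sum_{p \in C_j} \lVert p - \mu_j \rVert^2, \qquad \mu_j = \frac{1}{\lvert C_j \rvert} \sum_{p \in C_j} p

each iteration (assigning every point to its nearest centroid and then recomputing the centroids) does not increase this value, which is why the procedure converges, and why the resulting local optimum depends on the initial choice of centroids.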
geohash method

geohash is a geocode system that subdivides space into a grid shape according to geographic coordinates [15, 16]. geohash uses the region quadtree data structure and assigns string codes to particular quads using the base32 character set. the length of the geohash code defines the precision of the encoding. the advantage of this method is its arbitrary precision and the unique identification of each cell. postgis contains a method that returns the geohash representation of a geometry [4].

modified facility location algorithm

in accordance with previous development, a clustering method developed at the department of computer science at the university of west bohemia was selected for the point datasets [11]. this method is based on a local search algorithm for the facility location problem, but with several modifications. the clustering is not parameterized by a number of clusters, as the original method is, but by the facility cost. a higher facility cost reduces the amount of data faster and produces fewer clustering levels. another modification is that this method stores all intermediate clusters and builds a hierarchy of them. the method makes a single linear pass over the data and builds a hierarchy of clusters. each cluster is represented by the point nearest to the centre of the cluster; this point is called the representative. the clustering is done according to given criteria. the basic criterion is the geometrical distance of points, but other point attributes can be used. if the number of representatives reaches a defined count, the representatives are processed and a representative of the cluster of these representatives is selected. the clustering ends when all points are processed and the representatives on higher levels cannot be further clustered.

storing clusters in the database

the selected clustering method saves the created hierarchy of clusters on hard disk, in a separate file for each level of the hierarchy. because the developed application works with data stored in the database, it was necessary to design a way to save the hierarchy to the database. several possibilities were considered, and the expected properties were satisfied properly by the modified preorder tree traversal algorithm [17]. the whole hierarchy is stored in one table with a fixed number of columns. only identifiers of points that refer to a query result in another standalone table are stored in this table. the table for the hierarchy contains the point identifier, the identifier of the cluster representative, the level of the hierarchy and the left and right index values. this point indexing enables selection of the whole tree or a subtree in one query. another benefit of this index is that the number of children of one node can be easily calculated from the difference of the right and left values. an example of the hierarchy is shown in fig. 3 below, together with the direction of indexing. fig. 3 shows the reason why it is necessary to store the identifier of the point and its parent together with the level number: the same point-parent pair can be found on several levels of the hierarchy.

figure 3: example of cluster hierarchy

table 1 shows the storage of a part of the hierarchy displayed in fig. 3. points can be selected from the table in several ways. the first way is to select points by defining the range of left and right values, optionally with or without a given level number. another way is to utilize spatial functions from the database.
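before turning to the spatial selection, a minimal sql sketch of the first selection way (by the range of left and right index values, optionally restricted to one hierarchy level) may be helpful; the table name cluster_hierarchy is hypothetical, while the column names follow the description above and table 1:

-- select a subtree: all rows whose index interval lies inside the
-- [left_index, right_index] interval of a chosen representative row
-- (here illustratively 12 and 21), optionally limited to one level
select id_point, id_parent, level_point
from cluster_hierarchy
where left_index >= 12 and right_index <= 21
  and level_point = 0;

the second, spatial way of selecting points on a given level is described in the next paragraph.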
the geometry of the desired extent and the appropriate hierarchy level are defined, and the points on the given hierarchy level that intersect the extent geometry are selected.

table 1: cluster hierarchy stored in a table – example

id_point  id_parent  level_point  left_index  right_index
1         4          0            3           4
...       ...        ...          ...         ...
6         6          0            15          16
...       ...        ...          ...         ...
6         6          1            12          21
...       ...        ...          ...         ...
6         ...        2            1           42

the visualization of a query result with available clustering is based on the visualization of the component levels of the created hierarchy of clusters. the level of the hierarchy that will be visualized is selected according to the current extent of the map window in the gis viewer. the main problem of the cluster visualization was matching the hierarchy levels with the bbox sizes of the map windows. because the number of levels of the hierarchy is derived from the number of points in the dataset, it was necessary to choose the matching method according to the spatial extent of the dataset. the idea is based on matching the highest hierarchy level with the maximum bbox of the result dataset. lower hierarchy levels are subsequently matched with smaller map window bbox sizes. the matching is based on the ratio of the current map window extent to the maximum extent of the dataset: the smaller the value of the ratio, the lower the level of the hierarchy that is selected. in the standard case, the data for visualization are selected if they intersect the current map window bbox. together with simplification, a suitable level of the hierarchy is selected first and then the data on the given level that intersect the current map window bbox are selected. this method further reduces the amount of data that the user application receives. the disadvantage is the necessity of creating the hierarchy over the query results.

experiments and results

the new client application has been designed and then developed as a java application called postmap. postmap has been developed according to the designed structure described previously.

basic modules

three modules of postmap are shown in fig. 4.

figure 4: schema of the application

the first step represents running the analysis, where an sql query is at the beginning and an identifier of the result is at the end in the web client. the second step stands for the preview of the result, where the identifier of the result and the bbox of the current map window are at the start of the process and a kml file with the part of the result data intersecting the current map window is at the end. graphical visualization of the query results was implemented by two methods. both of the implemented methods have certain advantages and disadvantages. the first method is direct visualization of the entire query result content in the map window of the web graphical user interface (gui). the second method is based on publishing results in kml files and using another gis viewer as the visualization client.

graphical user interface

the first part of postmap which the user meets is the web graphical user interface (gui). the web gui is a javascript thin client in the form of a web page that contains functional sections with client-server communication. the web gui enables inserting and editing of sql queries, contains a list of the last 5 user queries and a map window. after the analysis finishes, the content of the query result is directly displayed in the map window. according to the finished rendition of the query result, the user can edit the previous sql query to reach the intended result.
fig. 5 shows the web client with an opened query result.

figure 5: web client with query result

another simplification method was implemented in the current version of postmap. the geohash method is used for large results where an overview of the feature distribution is needed rather than the precise feature positions. in cases where the position is important, the clustering method described above is used. an example of a result simplified by the geohash method is shown in fig. 6. the second visualization way is based on publishing query results in the kml format. the query result is exported from the application as a dynamic kml file with the networklink element after the given query finishes. in this case, the user can open the output file in any compatible gis viewer according to his needs (e.g. google earth). a prerequisite for correct dynamic visualization is support for the kml format with the networklink element in the selected gis viewer. this support is necessary for communication between the server part and the third-party gis viewer. the kml file with the networklink is opened in a selected gis viewer, the gis viewer starts to communicate with the server application and the server application starts to send the requested part of the query result. the gis viewer requests a part of a query result by sending the extent of the current map window. the advantage of this visualization is the minimization of the amount of data that is being used in the gis viewer at a certain moment. the disadvantages are the necessity of repeated transfers of small amounts of data over the network and the use of third-party gis viewers to display the query results. an example of the visualization of clustered features of query results through a dynamic kml file is shown in fig. 7.

figure 6: web client with simplified query result

figure 7: web client with simplified query result

rest api

the current version of postmap uses a rest api for two tasks. the first task is the management of the user queries. several urls are implemented for inserting, updating, deleting and listing of user queries. the responses to these requests depend on the http methods used, but mostly responses are returned in json format or as response codes only. the second task is the management of stored query metadata. this group of urls returns descriptions of stored user queries listed by several parameters. the main advantages of these restful services are an easier way to add new functions to postmap and easier handling of url requests. postmap can be used as a middleware for another application by accessing the services over the rest api.

discussion

postmap and its user interface enable sql queries based on select to be performed easily. unlike current solutions, there is no need for any configuration of the data visualisation, and the results can be visualised directly on a map. on the other side, we have identified these limitations:

• the sql results have to be stored as temporary tables. an update of a particular result essentially means deleting and creating the temporary table again.
• there exists the risk of sql injection, which is a security issue. such an issue can be solved by setting up an authentication and authorization layer on the level of the rest services.
• there is no possibility to apply a cartographic visualisation of a certain attribute in the form of cartograms. such a problem can be solved in the future through a collection of templates for kml styles.
• in the case of using a kml file with networklink, geometry features are loaded only according to the current bbox regardless of previously loaded data, although bboxes could partially overlap with those loaded before. such an issue can be solved, for example, by storing the previously published geometry features, comparing the new bbox to the previous one and sending only the missing geometry features.

conclusion

the main usage of postmap can be found in two fields. the first field is technical projects, where it can be used as a basic software tool for user spatial analyses and their visualization. postmap stands for a middleware that connects a database with stored data and functions to a visualization module, e.g. a web geoportal or another standalone application. postmap consumes regular sql select queries that can contain spatial functions and produces the corresponding outputs. the input sql queries can be predefined so that users fill in only several parameters. the outputs are represented by query results exported in the form of common kml files or visualized directly in its own web client. we suppose that the query result contains exactly one geometry attribute. the use of the kml format enables the export of the whole query result or the dynamic visualization. the second field of use is the education of general gis subjects or subjects focused on spatial databases. postmap can be used to introduce spatial functions and predicates. it is possible to demonstrate the use of functions and predicates in sql queries and to demonstrate how the outputs of sql queries respond to parameter changes. the contribution of this application consists in very quick and easy web visualization of query results without using any other clients.

acknowledgement

author michal kepka was supported by the european regional development fund (erdf), project “ntis – new technologies for the information society”, european centre of excellence, cz.1.05/1.1.00/02.0090. author jan ježek was supported by the project exliz – cz.1.07/2.3.00/30.0013, which is co-financed by the european social fund and the state budget of the czech republic.

references

[1] qgis user guide. available: http://www.qgis.org/en/docs/user_manual/index.html
[2] geoserver documentation: user manual. geoserver [online]. release 2.4.1. 2013. available: http://docs.geoserver.org/
[3] mapserver documentation: mapserver. [online]. release 6.4.0. 2013. available: http://mapserver.org/documentation.html
[4] ogc standards. open geospatial consortium. 2013. [online]. available: http://www.opengeospatial.org/standards/is
[5] georaptor documentation. georaptor [online]. release 3.0.1. 2011. available: http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=main_page
[6] li, zhilin. algorithmic foundation of multi-scale spatial representation. boca raton, fl: crc press, c2007, 280 p. isbn 978-084-9390-722.
[7] google inc. kml documentation. 2012. [online]. available: https://developers.google.com/kml/documentation/
[8] refractions research inc. postgis 1.5 manual. svn revision (11757). 2012. [online]. available: http://postgis.net/docs/manual-1.5/index.html
[9] visigoth software society. freemarker manual. version 2.3.20. 2013. [online]. available: http://freemarker.org/
[10] fielding, roy thomas. architectural styles and the design of network-based software architectures. chapter 5: representational state transfer (rest). doctoral dissertation, university of california, irvine, 2000. available: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
[11] macqueen, j. b. (1967). some methods for classification and analysis of multivariate observations. in proceedings of 5th berkeley symposium on mathematical statistics and probability, vol. 1. university of california press, 1967, pp. 281–297. available: http://projecteuclid.org/euclid.bsmsp/1200512992
[12] hartigan, j. a. and wong, m. a. algorithm as 136: a k-means clustering algorithm. journal of the royal statistical society, series c (applied statistics), vol. 28, no. 1, 1979, pp. 100-108. available: http://www.jstor.org/stable/2346830
[13] harada hitoshi. kmeans 1.1.0. plugin for postgresql. release 22. 7. 2011. [online]. available: http://pgxn.org/dist/kmeans/
[14] skála, jiří. algorithms for manipulating large geometric data. ph.d. dissertation. fakulta aplikovaných věd, západočeská univerzita v plzni, plzeň. 2012.
[15] marvalová, jindra. správa a vizualizace časoprostorových bodových dat. bachelor thesis. fakulta aplikovaných věd, západočeská univerzita v plzni, plzeň. 2013.
[16] niemeyer, gustavo. geohash. [online]. available: http://geohash.org/
[17] gniemeyer. geohash. in: wikipedia: the free encyclopedia. [online]. san francisco (ca): wikimedia foundation, 2001 [cit. 2013-09-19]. available: http://en.wikipedia.org/wiki/geohash
[18] van tulder, gijs. storing hierarchical data in a database. sitepoint. 30.4.2013. [online]. available: http://www.sitepoint.com/hierarchical-data-database/

the international exchange of students: problems and solutions at the riga technical university

janis strauhmanis
riga technical university, latvia
strauhmanis bf.rtu.lv

keywords: exchange of students, foreign languages, coursebooks

summary

the international exchange of students plays an important role in the acquisition of new knowledge and skills. however, it was possible to start implementing such exchange programmes at the riga technical university only in 1991, after the re-establishment of the department of geodesy. currently, the riga technical university cooperates with several countries in implementing student exchange programmes.
the main problems encountered in this process are similar: inadequate foreign language skills, a lack of internationally recognized coursebooks and other study materials, and insufficient cooperation between the universities that implement exchange programmes. these problems should be addressed by creating a working group and expanding cooperation, as well as by enhancing the requirements for students who participate in the exchange programmes.

the present situation

the first foreign students who came to study geodesy at the riga technical university (rtu) in 1994 were from lebanon; however, their basic field of study was civil engineering. a year later the students of rtu went to finland and sweden to study at the technical university of helsinki and the higher technical school of stockholm. from 1998 to 2004, several citizens of lebanon mastered the study programme in geodesy and cartography at the riga technical university and defended their engineering projects. the students of rtu went to study (mostly for one semester) in the following countries:

• denmark: university of aalborg, technical university of denmark,
• finland: mikkeli polytechnic,
• germany: high technical school of karlsruhe, high technical school of stuttgart,
• spain: university of valencia.

the following figures characterize the exchange programme of students in geodesy and cartography:

• in 1995/96 and 1996/97 four students from rtu studied abroad;
• in 1998/99 and 1999/2000 four students from lebanon studied at rtu and five students of the department of geomatics studied abroad;
• in 2000/01 and 2001/02 the lebanese students continued their studies at rtu and three students of the department of geomatics studied abroad;
• in 2003/04 the lebanese students completed their studies at rtu and three students of our department went to study in germany and finland.

during the following year a comparatively small number of students from the department of geomatics, rtu, studied abroad. in 2006/07 four of our students went to finland to study at the mikkeli polytechnic, one student went to germany – to the high technical school of stuttgart – and one to the university of valencia (spain). four students from spain (university of valencia), and two from germany and france, studied at the department of geomatics, rtu. it should be noted that the number of foreign students studying in latvia and also at the riga technical university has recently increased: in the academic year 2006/2007, 1425 students from 57 countries studied at latvian higher educational establishments, and at rtu – 76 students from 30 countries. 829 students from 25 latvian higher educational establishments are now studying abroad; 67 students of the riga technical university are now studying in 20 foreign countries.

problems and solutions

having been involved in the international exchange of students for more than ten years, we can identify the main problems; some of them, in our opinion, are the following:

• inadequate english language skills;
• a lack of internationally recognized coursebooks and other study materials in geodesy, surveying and cartography;
• some higher educational establishments that offer exchange programmes do not provide instruction in english for foreign students.
in our opinion, the above-mentioned problems could be solved in the following way:

• the students who want to go to study in another country should pass a foreign language test that proves their ability to master courses in their speciality in the required foreign language;
• before implementing an exchange of students, the relevant universities should exchange information about the previously acquired knowledge and the required subjects, as was already pointed out in the plan drawn up by the fig 2nd commission; in our opinion, this should be dealt with immediately;
• it is necessary to hold a seminar at the international level to discuss the issues of implementing student exchange programmes.

proposals about a new coursebook in cartography were made at the fig congress. it should be noted that we cooperate closely with our finnish colleagues in exchanging information, which makes it possible for us to improve the training of foreign students. the international institute for geo-information science and earth observation (itc) has great experience in training foreign students, and they consider that one of the main problems in working with groups of foreign students is their different levels of basic knowledge. this problem is encountered quite often. the department of geomatics of riga technical university is ready to participate in finding solutions to the above-mentioned problems.

reference data as a basis for national spatial data infrastructure

tomáš mildorf and václav čada
department of mathematics, section of geomatics, faculty of applied sciences, university of west bohemia in pilsen, univerzitní 22, 306 14 pilsen, czech republic
mildorf@kma.zcu.cz
cada@kma.zcu.cz

abstract

spatial data are increasingly being used for a range of applications beyond their traditional uses. the collection of such data and their update constitute a substantial part of the total costs of their maintenance. in order to ensure sustainable development in the area of geographic information systems, efficient data custody and coordination mechanisms for data sharing must be put in place.
this paper shows the importance of reference data as a basis for a national spatial data infrastructure that serves as a platform for decision-making processes in society. there are several european initiatives supporting the wider use of spatial data. an example is the inspire directive. its principles and the main world trends in data integration pave the way to a successful sdi driven by stakeholders and coordinated by national mapping agencies.

keywords: reference data, inspire, spatial data infrastructure, data integration

1. introduction

the role of spatial data in current society is increasingly important. spatial data help us to shape the environment we live in, to manage the resources we possess and to preserve our cultural heritage. the importance of spatial data is being recognised by decision makers, whose support is essential for the further development of spatial information technologies and the wider use of spatial data in practice. funding schemes at various levels of administration aim to support projects and initiatives dealing with access to heterogeneous spatial data through innovative technologies. the potential of spatial data is also given by the economic value of spatial information within public sector information in the eu. an analysis dating back to 1999 (see figure 1) is underpinned by recent studies including acil tasman (2008) and fornefeld et al. (2009). the range of applications where spatial data play an important role is growing alongside the demand for sustainable spatial data management. despite the importance of spatial data for society, there are certain questions which need to be addressed in order to achieve sustainable management and efficient use of spatial data. the geographic information panel (2008) declares that “current users of spatial information spend 80 per cent of their time collating and managing the information and only 20 per cent analysing it to solve problems and generate benefits.” how can we overcome this imbalance? recent activities by the european commission provided a european interoperability framework (eif) aiming to support the interoperability of european public sector information and related services, taking into account legal, organisational, semantic and technical issues.

figure 1: economic value of public sector information in the european union in 1999 (pira international ltd. & university of east anglia and knowledgeview ltd. 2000).

one of the most important projects with a focus on the harmonisation of spatial data and services is the infrastructure for spatial information in the european community (inspire) directive. inspire entered into force in 2007 as a european directive and over the next two years was transposed into the national legislation of all member states of the european union. the main objective of the inspire directive is to establish an infrastructure for spatial information in europe to “assist policy-making in relation to policies and activities that may have direct or indirect impact on the environment.” (european parliament 2007). is fulfilling the inspire directive sufficient to secure the sustainability of spatial data infrastructures (sdi) on a national level? is there anything that national mapping agencies must beware of?
the spatial data analyses that are needed for decision-making processes in society require spatial data of appropriate quality in terms of completeness, logical consistency, and positional, temporal and thematic accuracy. one aspect to which insufficient attention is paid in sdi building, and which is considered by the authors as a basis for national sdi, is the delimitation of reference data. what do the authors understand by the term reference data? what benefits for decision-making processes and the sustainability of national sdis do they present? in order to address the above-mentioned questions, the authors analysed selected data sources in the czech republic within the context of the inspire directive. the next chapter reviews the scope of inspire and its main principles. chapter 3 presents the results of the analysis of the selected data sources from the czech republic. chapter 4 describes the role of reference data within the context of an sdi. the need for reference data is underlined by global trends in data integration and the cohesion of cadastral and topographic data in sdi building in chapter 5. cases from the netherlands and great britain give context to the issues tackled in this paper and provide best practice in national sdi implementation with regard to reference data and maintenance of the inspire principles. the paper aims to start a discussion about these topics.

2. the scope of inspire

the inspire directive lays down rules that enable the sharing and reuse of pre-existing data. heterogeneous spatial data originating from various sources are harmonised according to the common inspire data specifications. a single access point enables users to search for the right data for their purposes, to seamlessly view the data and to download them or to perform other spatial services. inspire is a good basis not only for decision makers but also for planners, businesses, emergency management and others. the success of inspire is based on principles that are crucial for achieving the sustainability of the infrastructure. the inspire principles include:

• data should be collected once and maintained at the level where this can be done most effectively;
• it should be possible to combine seamlessly spatial data from different sources and share them between many users and applications;
• spatial data should be collected at one level of government and shared between all levels;
• spatial data needed for good governance should be available on conditions that are not restricting their extensive use;
• it should be easy to discover which spatial data are available, to evaluate their fitness for purpose and to know which conditions apply for their use. (inspire website 2012)

all of these principles can be achieved by implementing the inspire mechanisms for data sharing. however, in some cases this is only true to a certain degree. the heterogeneity of the spatial data harmonised by inspire can cause certain inconsistencies in the target data, for example when two datasets of different levels of detail are harmonised (see figure 2). for applications requiring data of a high level of detail and high accuracy, the data provided through the inspire infrastructure may not be sufficient. the legislation put in place by the inspire directive does not affect the collection and processing of data, which are considered by the authors to be the main sources of inconsistencies.
the primary scope of inspire is therefore on data at the european, national and regional level, where the inconsistencies are diminished by the level of generalisation and the expected level of quality.

figure 2: inconsistency of harmonised datasets of different levels of detail.

inspire represents a solid foundation for the european sdi. national sdis should benefit from the provisions of inspire. it should be the responsibility of national mapping agencies (nmas) to combine the national requirements and priorities with inspire and to secure the sustainability of the overall infrastructure. nmas should take advantage of the inspire implementation and trigger the creation of national sdis.

3. analysis of selected data sources in the czech republic

the authors analysed the situation of spatial data management in public administration in the czech republic in relation to the inspire directive and its principles. the focus was mainly on the semantic aspects of selected digital data sources of a higher level of detail, including:

• cadastral map (km);
• technical maps (tmo);
• planning analytical materials (uap);
• fundamental base of geographic data (zabaged).

all the geographic features from these data sources were compared in terms of their definition, and an overview of the relations between the features was drawn up. due to the complexity of the overview, which includes the definitions of all the features, only an indicative table showing the similarities between the analysed data sources is presented (see table 1). the results of this analysis show that many geographic features from the selected data sources are duplicated, they are not maintained at the most appropriate level and it is not easy to combine them with other data sources. three of the five inspire principles are not maintained, and the sustainability of the data sources forming the national sdi is questionable.

4. reference data

data collection and update make up a substantial part of the total costs of data maintenance. sharing of spatial data between different applications enables sharing of the costs of data management. the historical development of spatial data of public administration in many countries, and the lack of coordination between data producers and data users, has led to duplication in data collection. an example is the situation in the czech republic analysed by the authors. real-world phenomena are independently captured, processed, stored and updated by several organisations. based on the findings of the performed analysis, the authors propose the delimitation of reference data at the highest level of detail that will be shared between several applications of public administration and other users. a great achievement was the establishment of the basic registers in the czech republic, including the register of territorial identification, addresses and real estates (ruian). the geographic features included in ruian thus create a reference base for all applications of public administration and other users. the system of basic registers uses the term reference data for data maintained in the basic registers and given by law that are up-to-date, valid and unambiguous for every application of public administration.
table 1: the comparison of geographic features from the selected data sources. [the table is an indicative cross-comparison matrix with columns dkm, zabaged, tmo and uap; its rows list corresponding feature types (in czech) such as motorways and roads with their protection zones, railways, cableways and ski lifts, power lines and power plants, pipelines, water reservoirs and watercourses, landfills and tailings ponds, national parks, protected landscape areas, heritage reservations and zones, and protected mineral deposit areas.]

reference data were used as the basis for the documents forming the current inspire directive. the chapter on reference data of the european territorial management information infrastructure (etemii) white paper (2001) defined the following functional requirements for reference data:

• to provide an unambiguous location for a user's information;
• to enable the merging of data from various sources;
• to provide a context to allow others to better understand the information that is being presented. (p. 5)

the inspire twg cadastral parcels (2009) defines reference data as data that constitute the spatial frame for linking and pointing at other information that belongs to specific thematic fields, e.g. land use, land cover, agriculture and demography. these are considered as application data, which is a complementary term to reference data (inspire drafting team data specifications, 2008). reference data provide a common link between various applications and provide mechanisms for sharing information in society.
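the linking role of reference data can be illustrated with a small, purely hypothetical sql sketch: a thematic (application) dataset stores only the identifier of a reference feature, while the geometry comes from the reference dataset when the two are combined; none of the table or column names below refer to an existing register:

-- thematic table: one statistical value per address, without any geometry
-- reference table: addresses with a stable identifier and a point geometry
select r.address_id, r.geom, t.average_income
from reference_addresses r
join thematic_income t on t.address_id = r.address_id;

because the thematic record only points at the reference feature, any update of the address geometry in the reference dataset is immediately reflected in every application that links to it, which is exactly the kind of sharing mechanism described above.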
in the initial phase of the inspire development the following spatial data themes were defined as reference data (rdm working group 2002): • geodetic reference system; • units of administration; • units of property rights; • addresses; • selected topographic themes; • orthoimagery; • geographical names. (p. 11) it is necessary to note that these are spatial data themes covering a wide range of geographic features. the vision of the authors goes a step beyond the spatial data themes into a selection of particular geographic features that fulfil the above mentioned functional requirements for reference data. the selection of geographic features playing the role of reference data should be available for any application. the consensus on the features, their quality and the sources of updates as well as the forms of exchange should be agreed. the next chapter introduces examples from great britain, the netherlands and other countries to give context to the above mentioned ideas, especially in relation to reference data and their integration with application data. 5. trends in data integration 5.1. great britain great britain realised the importance of inspire in time and in 2008 the uk location programme1 was created. the programme combines the national priorities (implementation of the uk location strategy) with the requirements of inspire and provides a complex solution for the sharing and reuse of spatial information of public administration. one of the operational challenges of the uk location programme is, according to the geographic information panel (2008), to develop a set of core reference geographies captured at the highest level of detail. in the initial stage, the core reference geographies should include a geodetic framework, topographic mapping (at different resolutions and including ground height information), geographic names, addresses, streets, land and property ownership, hydrology/hydrography, statistical boundaries and administrative boundaries. one of the main building blocks of the uk location programme is the digital national framework2 (dnf) that includes the interoperability components such as feature catalogue, terminology, metadata, standards and reference model. in relation to reference data, the feature catalogue of the dnf base reference objects is of most importance and provides an 1 http://location.defra.gov.uk/programme/ 2 http://www.dnf.org/ geoinformatics fce ctu 9, 2012 56 mildorf, t., čada, v.: reference data for spatial data infrastructure agreed definition and a set of attributes for every geographic feature. the feature catalogue is not definite and can be further extended. the basis for the current feature catalogues represents the feature catalogue of the ordnance survey (os) mastermap. os mastermap is a common reference base containing a variety of information in four different product layers: address layer, imagery layer, integrated transport network layer and topography layer. the os mastermap database contains over 450 million geographic features. “every feature within the os mastermap database has a unique common reference (a toid®) which enables the layers to be used together, including the layer of your own information.“ (ordnance survey 2012). the example of the os mastermap topography layer is depicted in figure 3. figure 3: os master map – topography layer (ordnance survey 2012) the use of the common reference data aims mainly to improve interoperability, data harmonisation and spatial data quality and increase cross-sector collaboration. 
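the role of a common identifier such as the toid can be illustrated with a small, purely hypothetical python example: once every reference feature carries a stable unique id, a user's own application data can be attached to it by a simple join, without re-capturing any geometry (the pandas library and the sample records are assumptions made for the illustration, not data from os mastermap).

```python
import pandas as pd

# reference layer: stable identifiers maintained by the mapping agency
reference = pd.DataFrame({
    "toid": ["osgb1000000000001", "osgb1000000000002"],
    "feature_type": ["building", "building"],
})

# a user's own application data, keyed only by the same identifier
application = pd.DataFrame({
    "toid": ["osgb1000000000001", "osgb1000000000002"],
    "energy_rating": ["C", "E"],
})

# linking the two datasets requires nothing more than the shared id
linked = reference.merge(application, on="toid", how="left")
print(linked)
```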
the end benefits of reference data encompass (jones & wilks 2012): • reduction of costs for public, private and 3rd sector users of data; • improvement of quality, efficiency and delivery of services; • improvement of evidence base for informed policy development and decision making; • increase of research, innovation and commercial exploitation of location data to benefit uk economy; • facilitation of other government initiatives using location based information and tools. geoinformatics fce ctu 9, 2012 57 mildorf, t., čada, v.: reference data for spatial data infrastructure 5.2. the netherlands the strategic features of the national sdi in the netherlands are the key registers (basisregistratie) that have been developing since 2008. the system of key registers is coordinated by the ministry of the interior and kingdom relations. there are 13 key registers planned. 4 of them related to spatial information and maintained by kadaster, the dutch land registry office, include: • key register of topography (basisregistratie topografie, brt) – contains the topographic dataset topo10nl in the level of detail equivalent to scale 1:10 000; • key register of cadastre (basisregistratie kadaster, brk); • key register of large-scale topography (basisregistratie grootschalige topografie, bgt); • key register of addresses and inhabitants (basisregistraties adressen en gebouwen, bag). the key registers provide reference data for applications of public administrations as well as for private sector. the use of common reference data is obligatory for all public administrations. the inclusion of the large-scale topography as one of the key registers enables linking nonspatial and spatial information of public administration. non-spatial data can be analysed in the context of spatial data, e.g. visualisation of average income or violence in certain areas (peersmann et al. 2009). the current and potential uses of this key register are depicted in figure 4. figure 4: current and potential uses of bgt (adapted from peersmann et al., 2009). an important feature of the dutch sdi is that cadastral information is maintained together with the large-scale topography in one database. according to steudler et al. (2009), the layer of buildings is shared between these datasets and cadastral boundaries are aligned to the topographic features. geoinformatics fce ctu 9, 2012 58 mildorf, t., čada, v.: reference data for spatial data infrastructure 5.3. other examples during the international conference ‘spatial information for sustainable development’ in nairobi, ryttersgaard (2001) presented experiences from sdi implementation. one of his visions for further development in this area stated: “cadastral, topographic and thematic datasets should adopt the same overarching philosophy and data model to achieve multipurpose data integration, both vertically and horizontally.” (ryttersgaard 2001, p.7). one of the main priorities of the working group 2 of the permanent committee on gis infrastructure for asia and the pacific is the integration of datasets containing representation of man-made objects and natural objects. this is performed mainly by integrating cadastral and topographical data which are in most cases maintained separately with no existing links between them and which hinder further exploitation (rajabifard & williamson 2006). the national land survey of finland started with providing free access to reference data since may 2012 including topographic maps, imagery and digital elevation model. 
the publication aims to enable combination of application data from various sectors with common reference data (ratia 2012). according to koski (2011), economic growth should be stimulated. the interoperability of spatial data in germany is coordinated by the working committee of the surveying authorities of the german länder (adv). the key feature of the interoperability is the aaa concept for modelling of spatial information based on iso and ogc standards (working committee of the surveying authorities of the german länder 2011). aaa stands for afis-alkis-atkis which represent the geodetic control, cadastral and topographical systems of germany. 6. conclusions the requirements of the users at a local level go beyond the scope of inspire, especially in terms of data quality. national sdis supported by nmas should serve as a basis for inspire. based on the analysis of the situation in the czech republic, we can state that pure implementation of the inspire mechanisms for data sharing without introducing the national context is not sufficient for sustainable national sdi building. nmas should take the responsibility for coordination of the legal, organisational, technical and semantic aspects; meeting the national priorities and user requirements at local, regional and national level; and supporting the implementation of inspire in connection with a national sdi. the building of a national sdi should use the experience of already existing infrastructures and best practices in data integration. several examples were mentioned in the paper in order to provide the underlying information for the authors’ suggested approach for designing and implementing the national sdi in the czech republic. the trends in data integration aim to combine cadastral and topographic data and to provide a reference data for various applications. the inspire principles are taken as key priorities in sdi building. based on the performed analyses and long-term research activities, the authors are proposing to define reference data at the highest level of detail, so that they can be shared between various organisations of public administration and the private sector. the delimitation of reference data is expected on the level of geographic features. the understanding of the semantics of available data, together with organisational and legal aspects, is geoinformatics fce ctu 9, 2012 59 mildorf, t., čada, v.: reference data for spatial data infrastructure essential for further progress in national sdi building. the czech nma should coordinate this process in connection with lead experts from the field of geomatics and geoinformatics. a good progress in this matter started with the establishment of the register of territorial identification, addresses and real estates (ruian). however, wider scope and inclusion of other data sources is necessary for better exploitation of spatial data available in public administration and for securing the sustainability of data management. the definition of reference data mainly represents, but is not limited to, the following benefits: • sustainable development of the national sdi and support of its extensive use; • saving costs for data collection and data update of public administration; • source of guaranteed high quality data; • unique opportunity for integration of cadastral and topographic information; • possibility of integration with application data; • general usability and data availability. 
as already mentioned in the introduction, the aim of the paper is to raise a discussion about these topics. what is your opinion? references [1] acil tasman, 2008. the value of spatial information, available at: http://www. crcsi.com.au/documents/aciltasmanreport_full.aspx [accessed may 31, 2012]. [2] inspire website, 2012. available at: http://inspire.ec.europa.eu/. [3] european parliament, 2007. directive 2007/2/ec of the european parliament and of the council of 14 march 2007 establishing an infrastructure for spatial information in the european community (inspire). available at: http:// eurlex.europa.eu/johtml.do?uri=oj:l:2007:108:som:en:html [accessed may 31, 2012]. [4] european territorial management information infrastructure, 2001. etemii white paper: chapter on reference data. available at: http://www.ec-gis.org/etemii/ reports/chapter1.pdf [accessed march 30, 2012]. [5] fornefeld, m. et al., 2009. assessment of the re-use of public sector information (psi) in the geographical information, meteorological information and legal information sectors, düsseldorf, germany: micus management consulting gmbh. [6] geographic information panel, 2008. place matters: the location strategy for the united kingdom, great britain. [7] inspire drafting team data specifications, 2008. d2.5: generic conceptual model, version 3.0, available at: http://inspire.jrc.ec.europa.eu/reports/ implementingrules/dataspecifications/d2.5_v3.0.pdf [accessed may 31, 2012]. [8] inspire twg cadastral parcels, 2009. d2.8.i.6 inspire data specification on cadastral parcels – guidelines, v3.0. geoinformatics fce ctu 9, 2012 60 http://www.crcsi.com.au/documents/aciltasmanreport_full.aspx http://www.crcsi.com.au/documents/aciltasmanreport_full.aspx http://inspire.ec.europa.eu/ http://eurlex.europa.eu/johtml.do?uri=oj:l:2007:108:som:en:html http://eurlex.europa.eu/johtml.do?uri=oj:l:2007:108:som:en:html http://www.ec-gis.org/etemii/reports/chapter1.pdf http://www.ec-gis.org/etemii/reports/chapter1.pdf http://inspire.jrc.ec.europa.eu/reports/implementingrules/dataspecifications/d2.5_v3.0.pdf http://inspire.jrc.ec.europa.eu/reports/implementingrules/dataspecifications/d2.5_v3.0.pdf mildorf, t., čada, v.: reference data for spatial data infrastructure [9] jones, g. & wilks, p., 2012. uk location programme, benefits realisation strategy. available at: http://data.gov.uk/sites/default/files/benefits%20realisation% 20strategy%20v2.0%20final.pdf. [10] koski, h., 2011. does marginal cost pricing of public sector information spur firm growth? [11] ordnance survey, 2012. os mastermap definitive geographical information of britain. available at: http://www.ordnancesurvey.co.uk/oswebsite/products/ os-mastermap/index.html [accessed april 17, 2012]. [12] peersmann, m., eekelen, h. & meijer, m., 2009. the large scale topographic base map of the netherlands (gbkn): the transition from a public-private partnership (ppp) to a legally mandated key registry (bgt). in gsdi world conference. rotterdam, the netherlands. available at: www.gsdi.org/gsdiconf/gsdi11/papers/pdf/267.pdf [accessed march 21, 2012]. [13] pira international ltd. & university of east anglia and knowledgeview ltd., 2000. commercial exploitation of europe’s public sector information, luxembourg: office for official publications of the european communities. available at: http://www.ec-gis. org/docs/f15363/pira.pdf [accessed august 3, 2011]. [14] rajabifard, a. & williamson, i., 2006. integration of built and natural environmental datasets within national sdi initiatives. 
in seventeenth united nations regional cartographic conference for asia and the pacific. bangkok, thailand: united nations. [15] ratia, j., 2012. sdi interviews jarmo ratia of national land survey of finland on open data. available at: http://www.sdimag.com/20120302584/ sdi-interviews-jarmo-ratia-of-national-land-survey-of-finland-on-open-data. html [accessed may 31, 2012]. [16] rdm working group, 2002. reference data and metadata position paper. available at: http://inspire.jrc.ec.europa.eu/reports/position_papers/inspire_ rdm_pp_v4_3_en.pdf [accessed may 31, 2012]. [17] ryttersgaard, j., 2001. spatial data infrastructure, experiences and visions. [18] steudler, d. et al., 2009. cadastral template, a worldwide comparison of cadastral systems. cadastral template, a worldwide comparison of cadastral systems. available at: http://www.fig.net/cadastraltemplate/index.htm [accessed may 31, 2012]. [19] working committee of the surveying authorities of the german länder, 2011. national report 2010/2011, working committee of the surveying authorities of the german länder. available at: http://www.adv-online.de [accessed may 31, 2012]. geoinformatics fce ctu 9, 2012 61 http://data.gov.uk/sites/default/files/benefits%20realisation%20strategy%20v2.0%20final.pdf http://data.gov.uk/sites/default/files/benefits%20realisation%20strategy%20v2.0%20final.pdf http://www.ordnancesurvey.co.uk/oswebsite/products/os-mastermap/index.html http://www.ordnancesurvey.co.uk/oswebsite/products/os-mastermap/index.html http://www.ec-gis.org/docs/f15363/pira.pdf http://www.ec-gis.org/docs/f15363/pira.pdf http://www.sdimag.com/20120302584/sdi-interviews-jarmo-ratia-of-national-land-survey-of-finland-on-open-data.html http://www.sdimag.com/20120302584/sdi-interviews-jarmo-ratia-of-national-land-survey-of-finland-on-open-data.html http://www.sdimag.com/20120302584/sdi-interviews-jarmo-ratia-of-national-land-survey-of-finland-on-open-data.html http://inspire.jrc.ec.europa.eu/reports/position_papers/inspire_rdm_pp_v4_3_en.pdf http://inspire.jrc.ec.europa.eu/reports/position_papers/inspire_rdm_pp_v4_3_en.pdf http://www.fig.net/cadastraltemplate/index.htm http://www.adv-online.de geoinformatics fce ctu 9, 2012 62 klet observatory – european contribution to detecting and tracking of near earth objects milos tichy klet observatory, zatkovo nabrezi 4, cz-370 01 ceske budejovice, south bohemia, czech republic email: mtichy@klet.cz keywords: astrometry, near earth objects, telescope abstract near earth object (neo) research is an expanding field of astronomy. is is important for solar system science and also for protecting human society from asteroid and comet hazard. a near-earth object (neo) can be defined as an asteroid or comet that has a possibility of making an approach to the earth, or possibly even collide with it. the discovery rate of current neo surveys reflects progressive improvement in a number of technical areas. an integral part of neo discovery is astrometric follow-up fundamental for precise orbit computation and for the reasonable judging of future close encounters with the earth including possible impact solutions. a wide international cooperation is fundamental for neo research. the klet observatory (south bohemia, czech republic) is aimed especially at the confirmation, early follow-up, long-arc follow-up and recovery of near earth objects. it ranks among the world´s most prolific professional neo follow-up programmes. 
the first neo follow-up programme started at klet in 1993 using a 0.57-m reflector equipped with a small ccd camera. a fundamental upgrade was made in 2002 when the 1.06-m klenot telescope was put into regular operation. the klenot telescope is the largest telescope in europe used exclusively for observations of minor planets (asteroids) and comets, and full observing time is dedicated to the klenot team. equipment, technology, software, observing strategy and results of both the klet observatory neo project between 1993 and 2010 and the first phase of the klenot project from march 2002 to september 2008 are presented. they consist of thousands of precise astrometric measurements of near earth objects and also three newly discovered near earth asteroids. klet observatory neo activities as well as our future plans fully reflect international strategies and cooperation in the field of neo studies.

introduction neo research is an expanding field of astronomy, important both for solar system science and for protecting human society from the asteroid and comet hazard. neos are sources of impact risk and usually represent a low-probability but potentially very high-consequence natural hazard. studies of neos have contributed significantly to our overall understanding of the solar system, its origin and evolution. impact craters can be found on the earth's surface. impact craters are geologic structures formed when an asteroid or comet collides with the earth. all bodies in the solar system have been heavily bombarded by meteoroids, or asteroids, throughout their history. the surfaces of the moon, mars and mercury, where other geologic processes stopped hundreds of millions of years ago, record this bombardment clearly. on the earth, however, which has been even more heavily impacted than the moon, craters are continually erased by erosion and redeposition as well as by volcanic resurfacing and tectonic activity. thus only about 120 terrestrial impact craters have been recognized, the majority in geologically stable cratons of north america, europe and australia where most exploration has taken place. impacts on planets in the solar system have also been observed in recent years (the impact of comet shoemaker-levy 9 on jupiter in july 1994), as well as an impact on the earth (asteroid 2008 tc3, about 3 meters in diameter, which collided with the earth on october 7, 2008). it is therefore necessary to study the parent bodies of possible impact risk [10]. consequently, interest in detecting, tracking, cataloguing and physically characterizing these bodies has continued to grow. the discovery rate of current neo surveys reflects progressive improvement in a number of technical areas. near earth objects are asteroids and comets with perihelion distance q less than 1.3 au. the vast majority of neos are asteroids, referred to as near-earth asteroids (neas). neas are divided into four groups (amors, apollos, atens and ieos) according to their perihelion distances, aphelion distances and semi-major axes. there are currently more than 6000 known neas. potentially hazardous asteroids (phas) are neas whose minimum orbit intersection distance (moid) with the earth is 0.05 au or less and whose absolute magnitude h is 22.0 mag or brighter, i.e. whose estimated diameter exceeds about 140 meters. there are currently more than 1000 known phas.
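the thresholds quoted above translate directly into simple classification rules. the following python sketch (with hypothetical input values, not taken from the paper) applies the nea criterion (q < 1.3 au) and the pha criterion (moid ≤ 0.05 au and h ≤ 22.0 mag), and uses the commonly quoted relation d ≈ 1329 km / sqrt(albedo) · 10^(-h/5) to show why h = 22 corresponds to roughly 140 m for an assumed albedo of 0.14.

```python
import math

def estimated_diameter_km(h_mag, albedo=0.14):
    """Commonly used conversion from absolute magnitude H to diameter (km)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

def classify(q_au, moid_au, h_mag):
    """Apply the NEA and PHA thresholds described in the text."""
    is_nea = q_au < 1.3                      # perihelion distance below 1.3 au
    is_pha = is_nea and moid_au <= 0.05 and h_mag <= 22.0
    return is_nea, is_pha

# hypothetical object: q = 0.98 au, moid = 0.03 au, h = 21.5 mag
nea, pha = classify(0.98, 0.03, 21.5)
print(nea, pha)                                    # True True
print(round(estimated_diameter_km(22.0) * 1000))   # ~141 m for albedo 0.14
```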
virtual impactors (vis) are asteroids for which possible impact solutions, compatible with the existing observations, are known, and for which a very small, but definitely non-zero, probability of collision exists. virtual impactors are listed on the sentry impact risk page hosted by nasa/jpl [1] and on the neodys risk page maintained by the university of pisa, which operates the impact risk monitoring system clomon2 [5]. as new positional observations become available, the most likely outcome is that the object's orbit is improved, uncertainties are reduced, impact solutions are ruled out for the next 100 years and the object is eventually removed from the lists. wide international cooperation and on-line sharing of permanently updated data in the framework of neo observations is fundamental. the vast majority of permanently updated astrometric data and orbits of asteroids and comets, including near earth objects, passes through the minor planet center of the international astronomical union (harvard center for astrophysics, cambridge, ma, usa) [11], which acts as the international clearinghouse responsible for the efficient collection, computation, checking and dissemination of these data. the first step in understanding the neo population is discovery, of both near earth asteroids and near earth comets. the vast majority of neo discoveries have been made by wide-field telescopic surveys. the most prolific surveys are catalina sky survey, linear, loneos, neat and spacewatch, all of them supported by nasa. an integral part of neo discovery is follow-up astrometry, which is required so that the orbits of newly discovered objects become secure. extensive follow-up on timescales of weeks and months is usually required for subsequent return recoveries, and may become critical in ensuring that phas and vis are not lost. the calculation of precise orbits and the determination of impact probabilities require enough precise astrometric measurements covering appropriate orbit arcs. astrometric follow-up is essential also for targets of future radar observations, space mission targets and other observing campaigns. for brighter objects many amateur volunteer observers use small or moderate-size telescopes all over the world to help accomplish this task. for fainter objects just several professional telescopes of 1-m class or larger are used regularly for astrometric follow-up, mainly the jpl table mountain 0.6-m, mt. john 0.64-m, klet observatory klenot 1.06-m, spacewatch 1.8-m, and mt. lemmon 1.5-m [2]. in recent years this work has also been done with the magdalena ridge observatory 2.4-m and the astronomical research institute near charleston 0.81-m telescopes. considering their geographical distribution, the majority of them are situated in the u.s. a good number of european follow-up observations are provided by the community of amateur observers (great shefford being the most active of them). in addition to the klet observatory, only the calar alto 1.23-m telescope occasionally serves for neo follow-up in europe. all measurements of astrometric coordinates on the sky (right ascension and declination) are calculated as topocentric, although the calculations of orbits are made geocentric. as regards close approaches to the earth and reliable orbit computation, it is necessary to have precise coordinates of the observing site (longitude, latitude and elevation above sea level in meters).
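the conversion from the geodetic coordinates reported by an observatory to the geocentric quantities used in topocentric reductions is standard; a minimal python sketch using the usual reference-ellipsoid relations is given below. the sample latitude and elevation are chosen to be close to the klet values quoted later in the text, but they are only illustrative here.

```python
import math

def geocentric_constants(lat_deg, height_m, flattening=1/298.257, a_m=6378137.0):
    """Convert geodetic latitude and elevation to the parallax constants
    rho*sin(phi') and rho*cos(phi') used for topocentric reductions."""
    phi = math.radians(lat_deg)
    u = math.atan((1.0 - flattening) * math.tan(phi))   # reduced latitude
    rho_sin = (1.0 - flattening) * math.sin(u) + (height_m / a_m) * math.sin(phi)
    rho_cos = math.cos(u) + (height_m / a_m) * math.cos(phi)
    return rho_sin, rho_cos

# illustrative site at latitude ~48.86 deg and 1068 m elevation
print(geocentric_constants(48.86, 1068.0))
```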
klenot project the klet observatory neo follow-up programme was started in 1993 using 0.57-m reflector equipped with sbig ccd cameras. it was the second world most prolific neo astrometric programme from 1998 to 2001. considering the urgent need for astrometric follow-up of fainter and fast-moving objects as well as our experience we decided to use resources of the klet observatory (south bohemia, czech republic) for building a 1-meter size telescope for such purposes. this klenot telescope was put into operation in 2002 [6]. we report the results obtained during six years of regular klenot operation as well as future plans based on technical improvement of the klenot system and also inspired by the planned next generation surveys. klenot project goals the klenot project is a project of the klet observatory near earth and other unusual objects observations team (and telescope). our observing strategy is to concentrate particularly on fainter objects, up to a limiting magnitude of m(v)=22.0mag. reasonable object selection is a key part of the observation planning process. therefore the main goals of the klenot project have been selected as: confirmatory observations of newly discovered fainter neo candidates the majority of newly discovered objects which, on the basis of their motion or orbit, appear to be neos as well as objects that are suspected to be comets go on promptly to the neo confirmation web page (neocp) maintained by the minor planet center (mpc) [3]. such neo candidates need rapid astrometric follow-up to confirm both their real existence as a solar system body and the proximity of their orbits to that of the earth. some of new search facilities produce discoveries fainter than m(v) = 20.0mag. which need a larger telescope for confirmation and early follow-up. a 1-m class telescope is also very suitable for confirmation of very fast moving objects and our larger field of view enables to search for neo candidates having a larger ephemeris uncertainty. geoinformatics fce ctu 2011 109 tichy m.: klet observatory – european contribution to detecting and tracking of near earth objects follow-up astrometry of poorly observed neos newly discovered neos need astrometric data obtained over a longer arc during the discovery opposition when they get fainter. the highest priority has been given to virtual impactors and phas. special attention is also given to targets of future space missions or radar observations. it is necessary to find and use an optimal observing strategy to maximize orbit improvement of each asteroid along with efficient use of observing time, because reasonable object selection is a key part of the observation planning process. recoveries of neos at the second opposition for the determination of reliable orbits it is required to observe asteroids at more than one opposition. if the observed arc in a discovery apparition is long enough, the chance for a recovery at the next apparition is good. if the observed arc at a single opposition is not sufficient and the ephemeris of selected target is uncertain, then we usually plan to search along the line of variation based on data from minor planet center databases (marsden, williams, spahr), lowell observatory databases (bowell, koehn), and klet observatory databases (tichy, kocer). for this purpose a larger field of view is an advantage. analysis of cometary features the majority of new ground-based discoveries of comets comes from large surveys devoted, predominantly, to near earth asteroids. 
the first step in distinguishing these newly discovered members of the population of cometary bodies consists of confirmatory astrometric observations along with detection and analysis of their cometary features [8]. timely recognition of a new comet can help in planning future observing campaigns. the following step is to pursue the behavior of cometary bodies i.e. to obtain observation data of comet outbursts and fragmentation or splitting of cometary nuclei. search for new asteroids the primary goal of the klenot project is astrometric follow-up of neos and comets. moreover, all of obtained ccd images are processed not just for targeted objects, but also examined visually for possible unknown moving objects. this can be achieved because the effective field of view, observing time and limiting magnitude of m(v) = 22mag. of the klenot telescope enable us to find new objects. the obtained ccd images are processed with special attention to objects showing unusual motion. klenot telescope the klenot telescope is located at the klet observatory in the czech republic in central europe. the iau/mpc observatory code is (246) klet observatory-klenot. the geographical position of the observatory is longitude = +14°17’17" e, latitude = +48°51’48"n, h = 1068 m above sea level. it is situated at a rather dark site in the middle of a protected landscape area blansky forrest (blansky les). geoinformatics fce ctu 2011 110 tichy m.: klet observatory – european contribution to detecting and tracking of near earth objects the klenot telescope was built between 1996-2002 using an existing dome and infrastructure of the klet observatory. an original mount dating from 1960s was upgraded. a new control and computer room was built on the ground floor of the dome. the klenot telescope was completed using a 1.06-m primary mirror and a primary focus corrector. the main mirror was fabricated by carl zeiss jena using sital glass (zerodur type) and is f/3.0 . the primary focus corrector was designed by sincon, turnov, czech republic, and was fabricated by the optical facility of charles university, prague, czech republic, led by jindrich walter. the corrector consists of four spherical lens elements. the resulting optical configuration is f/2.7 folded prime focus where the ccd camera is located. the ccd camera used for the klenot telescope is photometrics series 300. the ccd chip sensor site si003b contains 1024 x 1024 pixels, pixel size 24 microns, and is back illuminated with high quantum efficiency, q.e. > 80 % in range 5500-8000 angstroems. imaging array size is 24.6 x 24.6 millimeters. the ccd camera includes a 16-bit digitizer with full-frame readout time of 5.4 seconds and liquid nitrogen cryogenic cooling. cryogen hold time for our 1.1 liter dewar is over 6 hours. dark current is virtually non-existent in this camera due to operating chip temperature of 183 k. the field of view of the klenot telescope is 33 x 33 arcminutes using the ccd camera mentioned above. the image scale is 1.9 arcseconds per pixel. the limiting magnitude is m(v) =21.5 mag. for 120-sec exposure time in standard weather conditions. the klenot telescope has been the largest telescope in europe used exclusively for observations of minor planets and comets up to now. all the observing time is dedicated to the klenot team. klet software package there has been developed a special software package for klenot at klet. 
the package combines observation planning, data-acquisition, camera control and data processing tools running on windows and unix platforms (recently we have been using freebsd-amd64). the system uses client-server architecture where appropriate and most of the software is associated with a sql database. the sql database stores orbital elements and other information on minor planets updated on daily basis from text-based databases; the mpc orbit database (mpcorb), maintained by the minor planet center, and from the asteroid orbital elements database (astorb), created and maintained by e. bowell at the lowell observatory. the asteroids listed in the spaceguard system the priority list and objects listed as a virtual impactors by sentry (jpl) or by clomon (neodys) are flagged in the database as well for more convenient search. in addition, the database holds orbital elements and other useful data on all solar system objects discovered at klet (database k_klet) and also information on comets (database comets) created and updated from more sources by klet. the database also contains positions, times and observed objects on all of the processed photographic plates and ccd images. all data are stored locally in the local network of the observatory, so the system works also in off-line mode; i.e. in cases when on-line services are not available. beside regular updates it is possible to trigger updates for all locally stored data from external sources at any time. geoinformatics fce ctu 2011 111 tichy m.: klet observatory – european contribution to detecting and tracking of near earth objects our observers are using a web-based tool called ’ephem’ for observation planning. the tool allows the user to get an ephemeris for one minor planet and/or for known minor planets in specified field in the sky at given time. the objects in the output list can be reduced to objects of given magnitude and/or type; i.e. to neas, phas, tnos, virtual impactors, klet discoveries, critical list objects, unusual or distant minor planets, trojans, spaceguard priority list objects and comets. besides the designation, position in the sky, magnitude, and other usual ephemeris data, the output list also includes information on object type, ephemeris uncertainty, date of last observation, length of orbital arc used in orbit computation. another tool, also used in observation planning, is program ’klac’ – klenot atlas coeli. this gui program shows stars and solar system objects with a line showing their daily motion across the sky in a selected region in the sky. the size of the region usually corresponds to the fov of the telescope used so it is also used to check the telescope position during an observation. the usno-b1.0, usno-a2.0 and gsc star catalogues can be used within klac as a source of positions, magnitude estimates and proper motion of stars. we use v++ for ccd camera control. the v++ is a precision digital imaging system developed by digital optics, a standard program for photometrics ccd cameras on the win32 platforms. for exposure control and data-acquisition a set of scripts in vpascal (buildin programming language in v++) has been written. the scripts store a sequence of several ccd frames (images) in one file in tiff format. in the header of the sequence file information about the number of frames in the sequence, time, exposure time, equipment used and other information is included. programs ’blink’ and ’sumviewer’ are used for blinking and manipulating raw ccd multiimage tiffs. 
two or more selected frames from the sequence can be alternatively displayed on the monitor and visually inspected. besides blinking, the programme also allows smoothing, inverting, shifting, co-adding and zooming of image frames. the images taken by ccd camera are then processed through the program ’astrometry’, which has been developed for the reduction of ccd images and automatic identification of stars with usno-b1.0, usno-a2.0 or gsc star catalogues. the images are reduced and all objects with given conditions for signal to noise ratio are found on the image. these objects are then identified with stars from the selected star catalogue. equatorial coordinates of objects are then determined, and at the same time stars with residuals greater than 1 arcsecond are excluded automatically or/and manually and magnitude of the objects is determined. the user then selects the desired object on the image, and the program gives appropriate output data for that object directly in the mpc format. the time of observation and other information needed for the output are derived from the data stored in the header of the image file. information about the processed ccd image (time, filename, frame number, equatorial coordinates of the center of the frame, telescope used, exposure time, position of objects on frame, etc.) are stored in the database of processed ccd images for later use, e.g. for automated precovery program. the residuals of the measured astrometric positions are checked before they are made available to the astronomical community. the calculation of residuals is based on osculating orbital elements of the object near the current epoch, so they are acceptable mainly for the evaluation of observations. in addition the delta-t variation of the mean anomaly is determined. checking of both residuals and the delta-t variation in mean anomaly helps verify object geoinformatics fce ctu 2011 112 tichy m.: klet observatory – european contribution to detecting and tracking of near earth objects identification. all the programs in the system are using the planetary and lunar ephemerides de405 provided by jpl for exact determination of planetary positions. klenot results the klet observatory is one of the world‘s most prolific professional observatories producing follow-up astrometry of neos [7]. the klenot project results presented here were obtained during the first phase of the klenot project from march 2002 to september 2008. it resulted in an important contribution to the international neo effort. the klenot telescope was out of operation due to dome reconstruction between may 2005 – december 2005. neo follow-up we have measured and sent to the minor planet center 52,658 positions of 5,867 objects, including 13,342 positions of 1,369 neas, 222 of which were potentially hazardous asteroids (phas) and 157 of which were virtual impactors (vis) in the time of observations. the majority of measured neas were confirmatory observations and early follow-up observations of newly discovered objects presented on the neo confirmation page (neocp) maintained by the minor planet center. these astrometric observations helped both to confirm new discoveries and to extend their observed arc. confirmatory observations were centered on newly discovered neo candidates fainter than magnitude m(v)=19.5mag. and faster moving objects. long-arc follow up astrometry is devoted also to neas fainter than magnitude m(v)=19.5 mag. 
special consideration is given to virtual impactors coming from the sentry and clomon automatic monitoring systems. in many cases data obtained by the klenot team alone made it possible to remove predicted impact solutions. these observations were included in 561 minor planet electronic circulars. most of the near earth asteroids observed have absolute magnitude h between 18 and 21, but some asteroids as intrinsically faint as h = 30 have been observed by the 1.06-m klenot telescope. recovery of neos at the second opposition is another important goal of the klenot project. in the framework of the klenot project we recovered 16 neas (including 2 phas). the vast majority of astrometric measurements have been sent to the minor planet center immediately. klenot discoveries although searching for unknown neas is just a complementary goal of the klenot project, all obtained images are checked for possible new objects. within the framework of this project 750 small solar system bodies have been discovered up to now. these new discoveries are processed with special attention to objects with unusual motion. understandably, the majority of them are typical main-belt minor planets, although the 3 neas can be considered the most important discoveries of the klenot project. there are two apollo-type asteroids, 2002 lk and 2006 xr4, and the aten-type asteroid 2003 ut55 [9].

designation | type   | a [au] | e    | i [deg.] | h    | arc             | closest earth approach
2002 lk     | apollo | 1.10   | 0.15 | 25       | 24.2 | 2002 june 1-8   | 0.023 au
2003 ut55   | aten   | 0.98   | 0.15 | 16       | 26.8 | 2003 oct. 26-27 | 0.0074 au
2006 xr4    | apollo | 1.04   | 0.27 | 11       | 26.2 | 2006 dec. 15-16 | 0.00401 au
table 1 – klenot discoveries

perspectives the next generation surveys, such as pan-starrs, lsst and the discovery channel telescope, will change the requirements for astrometric follow-up. the role of astrometric follow-up in connection with the new generation of all-sky deeper surveys should move towards fainter neos, which are in urgent need of astrometric position determinations over a longer arc. additional follow-up observations with other telescopes would also help make linkages in their archives [4]. another important study would be the search for and analysis of possible cometary features of newly discovered bodies to understand their true nature. a fundamental improvement of the klenot telescope was started in autumn 2008. it represents the installation of a new computer-controlled equatorial mount for the klenot telescope allowing better sky coverage, including lower solar elongations. the new mount has been designed and is in the process of being built. the original high-quality optics of the klenot telescope mentioned above will be used. the technical first light and testing observations of the klenot next generation started in autumn 2010. this new mount will substantially increase telescope-time efficiency, the number of observations, their accuracy and the limiting magnitude. the klenot hardware improvement has also been followed by software improvement. it is rather demanding to teach a computer to recognize moving bodies on astronomical images. thus this task is left to the astronomer, but there is still room for improving the limiting magnitude by co-adding klenot multi-tiff images.
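the co-adding idea mentioned above (and elaborated in the following paragraphs) can be illustrated with a short numpy sketch: frames are shifted against the predicted motion of a target before being summed, so the signal of the moving object accumulates while stars trail. this is only a crude integer-pixel, wrap-around sketch; the frame list, motion rate and position angle are placeholders, and only the 1.9 arcsec/pixel scale matches the value quoted earlier for the klenot camera.

```python
import numpy as np

def shift_and_add(frames, dt_s, rate_arcsec_min, pa_deg, scale_arcsec_px):
    """Co-add frames shifted against a target's predicted motion.

    frames            -- list of 2-d numpy arrays taken dt_s seconds apart
    rate_arcsec_min   -- apparent motion of the target (arcsec per minute)
    pa_deg            -- position angle of the motion (degrees)
    scale_arcsec_px   -- image scale (arcsec per pixel)
    """
    rate_px = rate_arcsec_min / 60.0 * dt_s / scale_arcsec_px  # px per frame
    pa = np.radians(pa_deg)
    stack = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        dy = -k * rate_px * np.cos(pa)      # undo the predicted motion
        dx = -k * rate_px * np.sin(pa)
        stack += np.roll(np.roll(frame, int(round(dy)), axis=0),
                         int(round(dx)), axis=1)
    return stack

# toy example: five 64x64 noise frames, target moving 30 arcsec/min at pa 90 deg
frames = [np.random.poisson(100, (64, 64)).astype(float) for _ in range(5)]
coadd = shift_and_add(frames, dt_s=120, rate_arcsec_min=30, pa_deg=90,
                      scale_arcsec_px=1.9)
```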
the program developed at the klet observatory has been designed to support a human eye, facilitating search for fainter objects in the solar system on our images – even for those not detectable by eye. the images are initially processed for obtaining their center coordinates, then co-added on their common centre, effectively lenghtening the exposure time of the compound image. this strategy may prove useful in a search for fainter slowly moving bodies like tnos. faster moving objects require a different approach. specifying the apparent motion and position angle of such an object permits us to move the images against its predicted motion, and the object emerges from the background noise, much like faint stars on a longer exposure time image. for astrometry of the object to be possible, only a chosen area of the image can be co-added this way. then the first position is found with respect to the reference stars of the first image and every other is computed based on the target body‘s movement. furthermore, since this method allows us to compensate for the target movement, it can also be used to detect cometary features or to search for new fainter cometary fragments. conclusion klenot project significantly participates in follow-up of phas and vis. the klenot telescope is the largest telescope in europe used exclusively for astrometric observations of asteroids and comets, so observations are focused on faint and fast moving objects. klenot geoinformatics fce ctu 2011 114 tichy m.: klet observatory – european contribution to detecting and tracking of near earth objects observations are used to confirm discovery, determine orbit or impact solution of such objects. improvement of klenot telescope will lead to enhanced output from the programme. acknowledgement the work of the klet observatory and the klenot project is funded by the south bohemian region. the klenot project was sponsored also by the grant agency of the czech republic reg. no. 205/98/0266, the 2000 neo shoemaker grant of the planetary society (u.s.a.), and the grant agency of the czech republic reg. no. 205/02/p114. references 1. chamberlin a. b., chesley s. r., chodas p. w., giorgini j. d., keesey m. s., wimberly r. n. and yeomans d. k. 2001. sentry: an automated close approach monitoring system for near-earth objects. bull. amer. astron. soc. 33:1116 2. larson s. 2007. current neo surveys. proceedings of the iau symp 236. pp.323-328. 3. marsden b.g. and williams g. v. 1998. the neo confirmation page. planetary and space science 46:299-302. 4. mcmillan r. s. and the spacewatch team. 2007. spacewatch preparations for the era of deep all-sky surveys. proceedings of the iau symp 236. pp.329-340. 5. milani a., chesley s. r., sansaturio a., tommei g. and valsecchi g. g. 2005. nonlinear impact monitoring: line of variation searches for impactors, icarus 173:362-384 6. ticha j, tichy m. and kocer m. 2002. klenot – klet observatory near earth and other unusual objects observations team and telescope. esa sp-500: acm 2002. pp.793-796. 7. ticha j., tichy m. and kocer m. 2007. neo-related scientific and outreach activities at klenot. proceedings of the iau symp 236. pp.371-376. 8. tichy m., ticha j. and kocer m. 2005. confirmation of comet discoveries. international comet quarterly 27:87-92 9. ticha j., tichy m., kocer m., honkova m. klenot project 2002-2008, meteoritics and planetary science, vol. 44 (2009), issue 12, p.1889-1895 10. http://neo.jpl.nasa.gov – nasa neo office 11. 
http://minorplanetcenter.net/iau/mpc.html – minor planet center

spatial interface of a small municipality information system built with open source software (in czech: prostorové rozhraní informačního systému malé obce řešené v open source software)

karel jedlička, department of mathematics, geomatics section, faculty of applied sciences, university of west bohemia, e-mail: smrcek@kma.zcu.cz (the author is supported by the research plan mšm 497775130). jakub orálek, department of mathematics, geomatics section, faculty of applied sciences, university of west bohemia, e-mail: fugas9@seznam.cz

keywords: small municipality information system, geographic databases, geographic information system, open source, postgresql, postgis, umn mapserver, jump, qgis, exchange format of the cadastre of real estates (vf iskn).

abstract the paper aims to present the possibilities of open source software for implementing the spatial interface of an information system for a small municipality. it describes the design of the project part by part: identification of the requirements of the municipal office (the user of the system), a description of the general architecture of the system and the choice of suitable (non-commercial) technologies for its implementation. the project also includes a description of the technology developed for importing the most important data layers (ownership information) into the system. the article is complemented by an overview of usable data sources for a small municipality information system in the czech republic.

introduction the goal of the project is to design a low-cost information system for the management and registration of municipal property and related agendas, which should both simplify the administrative work of the municipal office and support its decision making. the project has three phases:
• the first phase is the identification of the general requirements of the council of a smaller municipality for the information system, which was carried out in the work of novotný (2005).
• the second phase consists of the choice of suitable technologies, the design of their use and the implementation of a general spatial and attribute interface. emphasis is placed on robustness, security, low cost and, last but not least, on the user simplicity of the chosen solution.
• the following third phase fills the created system with existing spatial and attribute data sources. its essence is the design and implementation of a smooth transition (or linkage) from the existing ways of managing and registering municipal property (and related agendas) to a digital solution. it can be divided into two parts: the first deals with data sources standardised at the state level (external sources), the second with internal data sources of the municipality.

general architecture of the system

user requirements the municipal information system is used mainly by the staff of the municipal office and by citizens. the information system should facilitate the management of the municipality (public administration, public services), the promotion of the municipality and its economic management. public administration includes:
• management and maintenance of roads,
• spatial development and planning,
• cleanliness in the municipality,
• property management,
• safety,
• maintenance of sports and cultural facilities,
• building of infrastructure and utility networks.
examples of public services are:
• providing information,
• ensuring public transport services,
• ensuring communication between the state and the citizen,
• helping to solve citizens' problems.
adapted from novotný (2005).
structure the small municipality information system (hereafter ismo) is a system within which four main groups of users, or user roles, can be defined:
• system administrator – a role shared between the technology supplier and an employee of the municipal office. the technology supplier installs and commissions the system and trains the local administrator in common operational tasks; afterwards the supplier handles only the less common modifications of the system. the exact division of this role between the municipal office and the supplier depends on local conditions.
• data manager and editor – an official or a group of officials of the local office. the role consists of keeping the data of the system up to date and consistent. this is done by continuous updating of the internal data of the municipality (e.g. the register of inhabitants, parcels and buildings, the spatial plan, the technical map) and by regular batch updates of data from external sources (today the cadastral map is a typical example).
• external data supplier – an external organisation supplying data sources, e.g. the cadastral map, orthophotomap, base map, etc.; see the chapter on data sources.
• other users – mostly inhabitants of the municipality, but also other interested parties, e.g. offices of higher administrative units. this is mostly a passive role of information consumers, although even these users can influence the system, usually only indirectly, by sending notices about possible discrepancies to the data manager, who is obliged to react to them.
for a system with the above roles, the distributed environment of computer networks and a client-server solution can be used with success, namely its three-tier variant (presentation tier – the user interface; application tier – the functionality that gives the software its character; service tier – most often the database tier), which is shown in figure 1. more information about multi-tier architectures can be found, for example, in fastie (1999) and sei (2005).
figure 1: general scheme of the ismo technological solution – drawn in the unified modeling language (uml); more about uml can be found in page-jones (2001).
the basic component of the distributed system is a relational database. because ismo is an information system about territory, the database must be able to store not only attribute but also spatial data (together called geographic data). to avoid breaking the integrity between attribute and spatial data, the whole geographic database is accessible through a single interface defined by the database management system (dbms) and by the tools for managing spatial data. this interface is built on one of the technologies for connecting to a database, whether proprietary or standardised, and is labelled xdbc in figure 1. a thick client is software that contains application logic tied to the user interface – taken from fastie (1999). the client connects directly to the database tier through the defined interface; the communication runs over an intranet or the internet, so the standard tcp/ip network protocol is chosen. a thin client (typically a web browser) contains only the presentation tier, which communicates with the application, or application tier, through a defined interface, in a network environment usually the http(s) protocol. the application tier is a set of applications – application servers – communicating with each other using protocols that depend on the types of the individual servers.
the tier usually consists of a web server, which communicates externally with the thin client and passes its request to other servers in the tier (e.g. to a map server for a request for geographic data). the application server processes the request, communicates with the database tier if necessary, and passes the result to the web server, which sends the data back to the client. each of the applications communicates through defined interfaces (protocols). ismo has to be defined as an open system, which also allows working with external sources (see the chapter on data sources).

overview of usable technologies for the implementation of the system it is generally possible to use open source technologies, commercial technologies or their combination. given the requirement for a low-cost solution, open source technologies are chosen. an overview of them follows.

databases and spatial extensions as far as classic open source databases are concerned, there are several good products. among the better known ones are mysql [8], postgresql [7], maxdb, firebird and ingres. it can be said that mysql is currently the most widespread attribute database. in a municipal information system, however, emphasis is placed on the support of spatial data, and there the strongest database is postgresql, or rather its extension postgis. maxdb, firebird and ingres do not support spatial data. mysql in its latest version offers a very good spatial extension, in many respects comparable to postgis; it uses the same object models as postgis. the main advantage of postgresql/postgis over mysql is the wider support by open source thick clients, the possibility to store 3d and 4d data and a larger number of built-in functions working with spatial data.

map servers several open source map servers exist: map server, geoserver, alov map, mapit!, mapzoom and jshape. the map servers mapit! and jshape are not very suitable, since they support very few formats (jshape works only with shapefiles and mapit! only with raster data). mapzoom is a very simple tool and serves only for the creation of static maps. alov map appears to be an interesting map server; it can work with vector and raster data in both file and database (mysql) form and supports wms. all these possibilities are also provided by umn map server, which in addition has wider support for communication protocols (e.g. wfs), database systems (e.g. postgresql) and outputs (e.g. pdf), and last but not least it has clearer and more extensive documentation.

web servers there is a large number of open source web servers. jigsaw and tomcat are written completely in java; their main priority is the deployment of web technologies using this language (so-called java servers). the main advantage of jetty (also written in java) is the combination of a classic http server with a java server. the most widespread open source web server is apache (written in c). it supports a large number of technologies (e.g. php, cgi) and ssl, and it can be connected to the postgresql database. another advantage is the large amount of documentation and web forums.

software for the visualisation of geodata grass (geographic resources analysis support system) is a professional gis whose main strength lies in spatial, especially raster, analyses. grass supports the wms and wfs standards and can also be connected to postgis. for smaller municipalities, however, this software is unnecessarily complex and difficult to operate.
udig (user-friendly desktop internet gis) can be used both as a geospatial application and for creating new derived applications. it supports the wms and wfs standards and a connection to a postgis database. its disadvantage is the lack of extension modules available on the internet and the support of only a small number of data formats. jump (java unified mapping platform) is written entirely in java; its main advantages include the possibility of extension with new modules and operation independent of the operating system. the add-on modules offer, for example, the connection to a postgis database, wfs support (the wms standard is supported directly) and ims support; many other extension modules can be obtained on the internet. quantum gis (qgis) allows a connection to postgis, and wms support is provided by an add-on module. thanks to extension modules, new functions and tools can be added to quantum gis. the requirement for simple work with the system and for its security leads to the choice of technologies for which there is a strong development team, which are proven by successful deployment and for which extensive documentation and user guides exist. an example of such an open source solution is the combination of technologies chosen by the authors of this article: the postgresql database with the postgis spatial extension, the umn mapserver map server, the apache web server, a thin client in the form of a web browser (with javascript support) and the thick clients jump and qgis, which are described in the following chapter.

figure 2: scheme of the chosen open source technological solution of the small municipality information system.

the chosen open source solution the chosen open source solution of ismo is composed of the technologies shown in figure 2 and described below. postgresql with the postgis spatial extension serves as the relational database with the possibility of storing spatial data. the database communicates through the odbc interface and the sql3 language either directly with the thick clients (jump/qgis) or with the application tier, which is formed by the umn mapserver map server and the apache web server, which in turn communicates with the thin client (a web browser) over the http(s) protocol.

postgresql postgresql (http://www.postgresql.org) is an open source relational database system for working with classic attribute data. it can be run on the linux, unix, windows and mac os x operating systems. it fully supports foreign keys, table joins, views, triggers and stored procedures. it includes most of the data types from sql92 and sql99 and supports the storage of large binary objects such as images, sounds or video. commands in postgresql are entered on the command line; pgadmin iii, which provides a graphical user interface to the database, can be used as a helper application and supports the same operating systems as postgresql. postgresql communicates with the application tier (the map server or a thick client – jump, qgis) using the tcp/ip network protocol through the odbc interface and the sql3 language. details can be found in the postgresql documentation (2005).

postgis postgis (http://www.postgis.org) adds support for geographic object data types to postgresql. it is based on ogc 99-049 (1999). specifically, it uses the object data types point, linestring, polygon, multipoint, multilinestring, multipolygon and geometrycollection, and it contains several hundred spatial functions; a minimal usage sketch follows below.
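as an illustration of how the system's spatial layer could be populated and queried, the following hedged python/psycopg2 sketch creates a parcel table with a geometry column and runs a simple spatial query. the table name, columns, connection parameters and the srid 5514 (s-jtsk / krovak east north) are assumptions made for this example, not part of the original design.

```python
import psycopg2

# connection parameters are placeholders for a local ismo database
conn = psycopg2.connect(dbname="ismo", user="ismo", password="secret", host="localhost")
cur = conn.cursor()

# hypothetical parcel table with a polygon geometry in s-jtsk (epsg:5514)
cur.execute("""
    CREATE TABLE IF NOT EXISTS parcels (
        id serial PRIMARY KEY,
        parcel_number text,
        owner_name text
    );
""")
cur.execute("SELECT AddGeometryColumn('parcels', 'geom', 5514, 'POLYGON', 2);")

# insert one parcel from well-known text (wkt)
cur.execute(
    "INSERT INTO parcels (parcel_number, owner_name, geom) "
    "VALUES (%s, %s, ST_GeomFromText(%s, 5514));",
    ("123/4", "obec example", "POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))"),
)

# example spatial query: area of every parcel intersecting a search window
cur.execute("""
    SELECT parcel_number, ST_Area(geom)
    FROM parcels
    WHERE geom && ST_MakeEnvelope(-1, -1, 5, 5, 5514);
""")
print(cur.fetchall())
conn.commit()
cur.close()
conn.close()
```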
every object used in postgis can be created in wkt (well-known text) or wkb (well-known binary) form; in the database the spatial information is then stored in binary form. more detailed information about working with postgis is provided by refraction research (2006).

umn mapserver

umn mapserver (http://mapserver.gis.umn.edu) is a program suitable for building internet map applications. it can be used on linux, windows and mac os x. based on the parameters received from the web server, mapserver can create a map image or, using a template, an html document containing the map image. by combining mapserver with dhtml, php, javascript or other programming languages, very interesting interactive maps with many functions can be created.

apache web server

apache (http://www.apache.org) is a web (http) server that can be run on linux, windows, unix and mac os x. as its description suggests, it communicates with the client over the http protocol.

jump

jump – java unified mapping platform (http://jump-project.org/) is an application built around a graphical user interface and serves for displaying and editing geographic data. it includes many functions for basic work with geodata and is designed to be easy to extend and develop. the whole program is written in java, which has several advantages: jump can be run by users of any operating system, it offers access to all of its functions, and it is easily extensible. more can be read in the vivid solutions (2003) publication. the basic features of jump include:

- support for the gml, esri shapefile and wkt formats,
- postgis support provided by an add-on module,
- the ability to display attributes and geometric coordinates of selected features,
- changing of symbology,
- labelling of geographic features,
- editing of both geometry and attributes,
- a wms client,
- the ability to perform some analyses,
- easy extensibility.

quantum gis (qgis)

qgis (http://qgis.org) is an application for displaying geographic data. it contains many common spatial functions; new functions and tools can be added through plug-in modules. it can be run on linux, unix, mac os x and windows. qgis is described in detail by sherman (2005). the basic features of quantum gis are:

- support for all vector formats contained in the ogr library,
- support for all raster formats contained in the gdal library,
- connection to a postgis database,
- connection to grass and the ability to perform spatial analyses,
- displaying of attribute tables of selected geographic features,
- changing of symbology,
- labelling of geographic features,
- export to the map file used by mapserver,
- extensibility with plug-in modules.

web browser and client-side scripts

several web browsers exist (e.g. internet explorer, mozilla firefox, opera, netscape, safari). all of them communicate with the server over the http protocol, and their core capability is displaying html documents. as the internet has grown, so have the technologies supported by browsers, e.g. forms, embedded audio and video files and flash animations. static html can be extended with scripting languages running on the client side (in the web browser); the most widely used are javascript and vbscript.
these scripting languages make it possible to change the html document dynamically in response to an event (moving the mouse cursor over an object, a click, page load, etc.).

data sources

the data sources for ismo can be divided into two groups: external and internal. external sources are mostly standardized at the state (sometimes regional) level and serve the municipality mainly as background data. they are either imported into ismo or connected through standardized web services, in the case of geographic data through the web map service (wms) or web feature service (wfs) defined in ogc 05-008 (2005). wms defines a map as a portrayal of geographic information as a digital image file; the map and the data themselves are two different things. maps created according to the wms specification are presented in raster or vector formats (e.g. svg, cgm). wfs, by contrast, provides the source geodata themselves in the gml format.

external sources

the fundamental external source for ismo is the information on ownership and use of real estate in the municipality's cadastre. these data are provided by the czech office for surveying, mapping and cadastre (čúzk) in several forms, depending on how the cadastral documentation is maintained in the given territory:

- the digital cadastral map (dkm) including the file of descriptive information (spi) in the exchange format of the cadastre of real estate information system (vf iskn) (see čúzk b for more); the import of these data into ismo is described in the subsection import of vf iskn,
- a hybrid cadastral map: the parcel geometry is delivered in the *.cit raster format (see čúzk a for more), the parcel definition points in a vector format, and the spi in vf iskn,
- an analogue cadastral map: the data have to be scanned and at least the parcel definition points created in vector form; the spi is in vf iskn (the digitization of the spi was completed in 1998). the import of the latter two versions of cadastral data is described in the subsection import of an analogue or hybrid cadastral map.

under the decree on providing data from the cadastre of real estate of the czech republic, no. 162/2001, cadastral data are provided to local governments free of charge. other important external data sources are servers providing spatial data:

cenia

the czech environmental information agency (cenia) has created the map server http://geoportal.cenia.cz, which has become an important part of the public administration portal. the map server offers maps both as a standalone internet application and through the wms and arcims map services. the data provided cover the whole czech republic and are regularly updated. specifically, one can find there several thematic maps from the fields of geology, the environment or administrative division, topographic maps of the czech republic, an orthophotomap, and maps concerning the population. the orthophotomap or the geological map, for example, can be used by all czech municipalities, whereas a map of protected areas is useful to municipalities into which protected areas extend. on the other hand, some thematic maps whose smallest unit is one cadastral unit (e.g. population density) are of little use to municipalities.

izgard

the internet viewer of military geographic data, http://arwen.ceu.cz/website/startovani, is a project making the data of the military geographic information system (vgis) accessible. it was created by the military geographic and hydrometeorological office based in dobruška. it provides the digital atlas of the czech republic, aerial survey images and aerial images from floods.
the digital atlas can be used by all municipalities, whereas the flood images are useful mainly to municipalities affected by floods. all the data are available only through the arcims map service, which is mostly not supported in open source programs; an exception is jump, for which arcims support already exists in the form of an extension module.

úhúl

the forest management institute, http://www.uhul.cz, is run by the ministry of agriculture. this office carries out a large number of activities relating to forests across the whole czech republic, above all forest inventory, regional forest development plans, an information and data centre, and forest typology. in addition, it provides several map projects through wms and wfs. these are often not very detailed data (e.g. a soil erosion map) or data covering only a small part of the republic (e.g. soil chemistry treatment), so from an overall viewpoint municipalities will not make much use of them; there are, however, also data suitable for a large number of municipalities (e.g. the regional forest development plans).

internal sources

internal data sources are data whose maintenance and updating is the responsibility of the municipality. their structure was analysed in novotný 2005; a brief list follows:

- property and financial records,
- budget,
- fees collected from citizens,
- register of inhabitants, parcels and buildings,
- technical map,
- spatial (land-use) plan.

the structure of the internal data sources is predominantly attribute-based (the exceptions being the technical map of the town and the spatial plan) and can differ greatly from municipality to municipality (above all it is a question of the money invested). the design of the import and linking of a municipality's internal data is therefore strongly dependent on the particular territory and is beyond the scope of this text.

import of vf iskn

the import of iskn exchange format data into databases supported by open source software was addressed, for example, by landa (2005), who stored attribute data in a postgresql database and vector data in the native format of grass gis. part of the ismo design is the implementation of both spatial and attribute data of vf iskn into postgis. the exchange format of the cadastre of real estate information system is also called the new exchange format (nvf). its import into the postgresql/postgis database is handled by an application written within the project in python; for that, python had to be connected to the database through the standard odbc interface. before running the program, a postgis database must already exist. during the import the user enters only the name of the odbc connection, the user name, the password, the name of the file in the nvf format (located in the application's root directory) and the target database in postgis. from the programmer's point of view, the program can be divided into several steps:

- if the existing database is not empty, all of its tables are dropped (except the two standard postgis tables spatial_ref_sys and geometry_columns),
- tables are created according to the iskn structure and filled with attribute data,
- two files containing the sql commands for creating the primary and foreign keys, respectively, are generated,
- the primary and then the foreign keys are created in the database,
- geometry is created according to the ogc specifications in selected tables (see fig. 3 – data model).

the resulting data model (whose most important parts are shown in fig. 3) contains several dozen tables, three of which contain geometry in addition to attribute data. a minimal sketch of these import steps is given below.
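the following sketch only illustrates the import flow listed above and is not the project's actual code: the odbc data source name, the table and column names and the parse_nvf helper are hypothetical placeholders, and the pyodbc driver is assumed.

# a minimal sketch of the import flow described above, not the project's code;
# the dsn, table names and parse_nvf() are hypothetical placeholders
import pyodbc

KEEP = ("spatial_ref_sys", "geometry_columns")   # standard postgis tables are preserved

def parse_nvf(path):
    # placeholder: real code would translate the blocks of the nvf file into
    # create table / insert statements; here only a tiny parcel table is produced
    return [
        "CREATE TABLE par (id integer, par_cislo integer)",
        "INSERT INTO par VALUES (1, 123)",
    ]

def import_nvf(dsn, user, password, nvf_file):
    conn = pyodbc.connect("DSN=%s;UID=%s;PWD=%s" % (dsn, user, password))
    cur = conn.cursor()

    # 1. drop all existing tables except the two postgis system tables
    cur.execute("SELECT tablename FROM pg_tables WHERE schemaname = 'public'")
    for (name,) in cur.fetchall():
        if name not in KEEP:
            cur.execute('DROP TABLE "%s" CASCADE' % name)

    # 2. create tables following the iskn block structure and load attribute rows
    for sql in parse_nvf(nvf_file):
        cur.execute(sql)

    # 3.-4. primary keys first, foreign keys afterwards (read from the two
    #       generated sql files in the real tool); one primary key as an example
    cur.execute("ALTER TABLE par ADD PRIMARY KEY (id)")

    # 5. build ogc geometry in the selected tables, here the parcel table
    cur.execute("SELECT AddGeometryColumn('par', 'geom', -1, 'POLYGON', 2)")
    conn.commit()
    conn.close()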
parcel boundaries (hp) and other map elements (dpm) have geometry of type linestring; parcels (par) are of type polygon. geometry for further tables can easily be added analogously to these three cases; for example, a polygon geometry could be added to the buildings table (bud). the geometries of the individual tables are derived from the coordinates stored in the table of planimetric points (sobr). point coordinates are, however, also found in other tables (e.g. dpm, sbm, op), which can be used when adding further geometry. for municipalities it is important to visualize above all the ownership relations (connected with the parcel table), so no further spatial columns are added to the database.

for easier visualization with thick clients it is useful to pre-prepare the data in the database. as an example in this data model, the columns land type name and land use name were added to the parcel table (par) from the land type table (drupoz) and the land use table (zpvypo), respectively. in essence these columns are "copied over" on the basis of the relations between the tables.

fig. 3: the key part of the resulting data model after the import of the vf.

import of an analogue or hybrid cadastral map

when importing an analogue map into ismo, three main steps have to be carried out:

- scanning of the pk map rasters,
- transformation into s-jtsk,
- vectorization of the parcel reference points.

the scanned maps must be stored in a format supported by the map server (most common raster formats). before the transformation into s-jtsk, map shrinkage, which can amount to several percent, has to be taken into account. each map sheet changes its shape differently, and the continuity of adjacent map sheets is a basic requirement of the transformation; the deformation of the map can be described by interpolation surfaces defined by their boundary. the transformation into s-jtsk is performed from a local coordinate system. several transformation methods exist; for use in ismo a projective transformation can be applied. more detailed information on the digitization of analogue maps is given by čada (2003). the vectorization of the parcel reference points is done over the transformed raster base, and the parcel definition points are stored directly in postgis.

when importing a hybrid map, the vector and raster base layers are already available; it is only necessary to import the vector data from the original vector format into the postgis database (this can be done with standard functions). the rasters are kept in the server file system in a form that allows them to be served to the client together with the other data. the final phase of the import is the same for both the analogue and the hybrid map: foreign keys have to be created to link the parcel definition points with the spi.

conclusion

the aim of this paper was to create a methodology for implementing the spatial interface of a municipal information system. the paper is part of a project to build an information system that could be used by smaller municipalities. from the viewpoint of the architecture of the ismo technological solution, building the spatial interface can be regarded as the key pillar, because if the proposed architecture can handle geographic data, it is robust and functional enough to work with any type of attribute data as well. the attribute interface is often highly variable, depends on how the individual agendas are kept in a particular municipality, and is not analysed in detail in this paper.
řešeńı postavené na open source softwarových technologíıch bylo zvoleno ze dvou d̊uvod̊u: � jednotlivé technologie jsou k dispozici zdarma, � pokud je vybrána vhodná kombinace technologíı, je již v dnešńı době k dispozici dostatečná dokumentace, a to i v českém jazyce. tyto dva fakty zp̊usobuj́ı, že instalace jednotlivých technologíı a správa výsledného řešeńı je relativně jednoduchá. obec si ji může dělat sama a nebo (častěǰśı př́ıpad) si najmout firmu, která za přijatelnou cenu řešeńı navrhne, implementuje a následně provád́ı kvalifikovanou údržbu systému (viz. popis roĺı systému v podkapitole struktura). kĺıčovou část́ı prostorového rozhrańı je agenda informuj́ıćı obec o stavu nemovitého majetku na jej́ım územı́ (data katastru nemovitost́ı), informace o ochranných pásmech (životńı prostřed́ı, památková péče, vodńı zdroje, atp.), vize rozvoje obce (územńı plán, geografická poloha obce). v př́ıspěvku jsou navržené datové zdroje (interńı a exterńı), ze kterých je možné informace źıskat. v podstatě jsou možné tři cesty práce s podkladovými daty v ismo: � periodický import aktualizovaných dat – použito pro data katastru nemovitost́ı, pomoćı modulu import vf iskn, př́ıpadně daľśıch standardńıch funkćı navrhovaného řešeńı. � využit́ı některého z formát̊u internetové mapové služby (nejčastěji wms či wfs) – požadovaná data se připoj́ı do klienta ismo přes standardńı protokol. využito pro ostatńı data z exterńıch zdroj̊u. � import a následná správa dat př́ımo v ismo – lze využ́ıt zejména pro postupný převod dat z existuj́ıćıch (převážně atributových) analogových i digitálńıch datových báźı obce. autoři se domńıvaj́ı, že navrhovaný zp̊usob implementace ismo je dobrou alternativou k již existuj́ıćım (většinou komerčńım) řešeńım. nezast́ıraj́ı ovšem, že zejména v některých fáźıch př́ıpravy dat (transformaćı), je třeba využ́ıt komerčńı technologie. zároveň se ukazuje jako vhodná alternativa neomezováńı operačńıho systému pouze na nekomerčńı variantu (linux), ale ponechat rozhodnut́ı o operačńım systému (jak na serverovém tak klientském hardware) na konkrétńıch možnostech a schopnostech uživatel̊u/administrátor̊u. geinformatics fce ctu 2006 141 prostorové rozhraní is malé obce řešené v oss reference 1. cenia. portál veřejné správy české republiky. online10 2. čúzk a (český úřad zeměměřičský a katastrálńı). rastrová data katastrálńıch map. online11 [cit. 2006-05-05]. 3. čúzk b (český úřad zeměměřičský a katastrálńı). výstupy dat iskn ve výměnných formátech. online12 [cit. 2006-05-03]. 4. fastie, will. understanding client/server computing. in pc magazine: “enterprize computing”. page 229-230. 1999. online13 [cit. 2006-4-30]. 5. novotný, jǐŕı, čerba, otakar. informačńı systém malé obce. západočeská univerzita v plzni 2005. 6. ogc 05-008. opengis® web services common specification. open geospatial consortium. version: 1.0.0. 2005. 7. ogc 99-049. opengis simple features specification for sql. open gis consortium, inc. 1999. online14 [cit. 2006-05-04]. 8. page-jones, meilir. voráček, karel. základy objektově orientovaného návrhu v uml. 1. vyd. praha : 2001. isbn 80-247-0210-x. 9. the postgresql global development group. postgresql 8.0.7 documentation. 2005. online15 [cit. 2006-02-16]. 10. sei (software engineering institute), carnegie mellon university. client/server software architectures: an overview. 2005. online16 . [cit. 2006-4-30]. 11. sherman, gary e. quantum gis user guide. 2005. version 0.7. online17 [cit. 200605-09]. 12. ústav pro hospodářskou úpravu les̊u (úhúl). mapový server úhúl. 
13. vivid solutions, inc. jump unified mapping platform data sheet. version 1.0, 2003. [cit. 2006-03-29].
14. vojenský geografický a hydrometeorologický úřad (vghmúř). izgard. online: http://arwen.ceu.cz/website/startovani [cit. 2006-05-09].
15. vyhláška o poskytování údajů z katastru nemovitostí české republiky, č. 162/2001. online: http://www.podnikame.cz/zakony01/index.php3?co=z2001162 [cit. 2006-05-01].

a grass gis application for vertical sorting of sediments analysis in river dynamics

annalisa minelli, università degli studi di perugia – dipartimento di ingegneria civile e ambientale, via g. duranti, 93 – 06125 perugia (italy), email: annalisa.minelli@unipg.it

gary parker, university of illinois at urbana-champaign, department of civil and environmental engineering, department of geology, 205 n mathews avenue, 61801 urbana, il (usa), email: parkerg@illinois.edu

paolo tacconi, università degli studi di perugia – dipartimento di ingegneria civile e ambientale, via g. duranti, 93 – 06125 perugia (italy), email: ptacconi@unipg.it

corrado cencetti, università degli studi di perugia – dipartimento di ingegneria civile e ambientale, via g. duranti, 93 – 06125 perugia (italy), email: corcen@unipg.it

keywords: grass, geomorphology, river dynamics

abstract

the extreme versatility of grass gis in different research fields is well known. this work presents a tool for the analysis of vertical sorting of sediments in river dynamics. in particular, a grass gis python module has been written which implements the forecasting sorting model by blom & parker (2006) to analyze the evolution of river bed composition in depth in terms of grainsize. the module takes a dem and information on the bed load transport composition as input. it works in two different and consecutive phases: the first uses the grass capabilities for analyzing geometrical features of the river bed along a chosen river reach; the second is the "numerical" phase and implements the forecasting model itself, then executes statistical analyses and draws graphs by means of the matplotlib library.
moreover, a specific procedure for the import of a laser scanner cloud of points is implemented, in case the raster dem map is not available. at the moment, the module has been applied using flumes data from saint anthony falls laboratory (minneapolis, mn) and some first results have been obtained, but the "testing" phase on other flume’s data is still in progress. moreover the module has been written for geoinformatics fce ctu 2011 39 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics grass 65 on a ubuntu linux machine, even if the debugging of a grass 64, windows version, is also in progress. the final aim of this work is the application of the model on natural rivers, but there are still some drawbacks. first of all the need of a high resolution dem in input, secondly the number and type of data in input (for example the bed load composition in volume fraction per each size considered) which is not easily obtainable, so the best solution is represented by testing the model on a well instrumented river reach to export in future the forecasting method to un-instrumented reaches. introduction the vertical sorting of sediments in gravel and sandy-gravel bed rivers is one of the factors directly involved in the evolution processes of river morphology [1]. in order to obtain a better understanding and modeling of the main physical processes occurring in vertical sorting and due to the large amount of field work behind grainsize analyses on a real river, some prediction methods for the study of vertical sorting of sediments have been formulated in time [2]; [3]; [4]; [5]; [6]; [7]. in this work, the equilibrium sorting model [8] has been implemented on the open source geographical informative system grass [9], and a specific tool, which can be integrated in the main gis structure, has been created. the specific choice to implement the theoretical model in gis has been realised in order to achieve the aim of computerizing the model prediction, simplifying the field work of sampling not only but also sieving and classifying a terrain. in addition to this an other main scope is represented by obtaining the connection between the predictive capabilities of the model to the gis capabilities in analyzing terrain morphology. in fact, the model application requires a priori an accurate knowledge and extraction of bedforms characteristics on the river bed surface, that grass gis can automatically provide. moreover, it is interesting testing the application of gis at microscopical scale (i.e. grainsize specific scale). first of all, starting from the hypothesis that, there is equilibrium in entrainment and deposition fluxes of sediments in steady conditions, a resolution method for the model has been formulated. starting from few geographical information (such as for example a digital elevation model) a geographical and geometrical analysis concerning bed form dimensions, is performed. secondly some statistical analyses are executed on the main geometrical features of the bedforms. once the geometry characteristics of the river bed and statistical distribution of the model’s parameters are analyzed, the prediction can be performed: the resolution method uses an iterative procedure to find the vertical grainsize composition values for each point and for each different value of depth (over the bed surface) examined. the code written (v.eqsm.py) is a python script for grass gis and uses python scientific, graphical and mathematical libraries. 
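the iterative resolution outlined above can be pictured as a simple fixed-point loop. the sketch below is illustrative only and is not the code of v.eqsm.py: the solve_equations function, its arguments, the tolerance and the iteration cap are hypothetical stand-ins for the three model equations described in the next section.

# illustrative sketch of the fixed-point iteration described above;
# solve_equations() is a hypothetical stand-in for the three model equations
def solve_equations(f1, f2, geometry, bedload):
    # placeholder: a real implementation would evaluate the lee-deposit
    # composition, the lee sorting parameter and the volume fractions;
    # here the guess is simply echoed back so the sketch terminates
    return f1, f2

def predict_fractions(geometry, bedload, tol=1e-4, max_iter=50):
    f1, f2 = 0.5, 0.5                      # start from equal volume fractions
    for _ in range(max_iter):
        new_f1, new_f2 = solve_equations(f1, f2, geometry, bedload)
        if abs(new_f1 - f1) < tol and abs(new_f2 - f2) < tol:
            return new_f1, new_f2          # hypothesized and computed values match
        # otherwise take the mean of hypothesized and obtained values and repeat
        f1, f2 = (f1 + new_f1) / 2.0, (f2 + new_f2) / 2.0
    return f1, f2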
currently the module is in the testing phase. the predictive processes have been verified step by step during the implementation and, in the subsequent calibration phase, experimental data from the main channel of the saint anthony falls laboratory (minneapolis, mn, usa) have been used [10]. this choice was made because of the need to calibrate the tool in a controlled environment with as few external influences as possible. at present the model predictions are being verified with flume data, although the module should be applied to real river data as soon as possible, given the need for well developed bed forms on which to formulate a reasonable prediction. the tool implementing the equilibrium sorting model currently takes as input, as required by the model, both the grainsizes for which predictions are to be formulated and the bedload composition for the grainsizes examined. it generates as output the prediction in depth of the sediment sorting for one or more investigated points and for a specified number of layers in depth.

the equilibrium sorting model in gis

the equilibrium sorting model is a vertical sorting model of sediments formulated by a. blom and g. parker (2006) [11] as a steady simplification of the main framework for modeling vertical sorting phenomena [12]. the model assumes continuity of the terrain in depth [13], in the sense that not only a terrain "active layer" below the bed surface is involved in the entrainment and deposition processes, but each grain of a specific size at a specific depth has a certain probability of being picked up by the current (fig. 1). moreover, the model is grainsize specific, so that different grainsizes can be examined; for this first implementation a bimodal mixture has been considered. in addition, the use of grass gis allows the river bed, including the bedforms along the examined reach, to be analyzed as accurately as the resolution of the elevation map permits, and allows the user to apply the model starting from any georeferenced data. going deeper into the model, the evaluation of vertical sorting is done point by point along a river reach; the river reach profile can be simplified as reported in fig. 2.

figure 1: the real behavior of terrain erosion in depth (left) and the simplified schema supposing erosion only in a stratum of thickness la (right) – image courtesy of g. parker, c. paola and s. leclair.

if each bedform is separated into an ascending (stoss) and a descending (lee) face, it can be hypothesized that on the lee face only deposition occurs, while on the stoss face both entrainment and deposition occur (fig. 2). the depth-specific equation simulating mass conservation through entrainment and deposition fluxes over an infinitesimally long reach segment in time reads:

\frac{\partial \bar{C}_i}{\partial t} = c_b \bar{P}_s \frac{\partial \bar{F}_i}{\partial t} + c_b \bar{F}_i \frac{\partial \bar{P}_s}{\partial t} = \bar{D}_{ei} - \bar{E}_{ei}

where:

- C_i is the mass concentration of sediments of grainsize i;
figure 2: (left) the simplified river reach profile and some geometrical parameters describing the dune profile: length of the dune and of its stoss and lee faces (λ), relative height of the dune (∆), total height of the bottom of each dune and mean bed level (η); (right) entrainment and deposition rates on the stoss and lee faces, with the solid discharge crossing the top of the dune (q_top) – image courtesy of a. blom and g. parker.

- c_b is the total mass concentration of sediments in the bed surface;
- P_s is the probability density function that a sediment can be picked up by the current;
- F_i is the volume fraction of sediments of grainsize i;
- D_ei is the deposition rate of a specific grainsize;
- E_ei is the entrainment rate of a specific grainsize.

all terms are averaged over a series of bedforms (the reach length). moreover, the probability density function P_s depends on the mean elevation of the river bed, averaged over a series of bedforms (\bar{\eta}_a). if the framework is reduced to steady conditions and equilibrium between entrainment and deposition rates is hypothesized, the prediction on the lee faces can be formulated and the only unknown variable is F_i:

\bar{f}_{ie}(z) = \bar{f}_{lee\,loc\,ie}(z) = f_{top\,ie} \cdot \frac{\int_{\eta_{b\,min}}^{\eta_{b\,max}} \frac{J(z)}{\lambda\Delta}\,\omega_{ie}(z)\,\tilde{p}_{be}\,\mathrm{d}\eta_b}{\int_{\eta_{b\,min}}^{\eta_{b\,max}} \frac{J(z)}{\lambda\Delta}\,\tilde{p}_{be}\,\mathrm{d}\eta_b}

where:

- f_lee loc ie is the volume fraction of grainsize i sediments in the lee deposit (downstream of the bedform);
- f_top i is the volume fraction of sediments of grainsize i overtaking the bedform;
- J is the heaviside step function;
- η_b min and η_b max are the minimum and maximum values recorded for the bottom of each bedform in the river reach;
- λ and ∆, as shown in fig. 2, are the length and drop of a single bedform;
- ω_i is the lee sorting function, able to predict the depth at which the i-th grainsize is found in the lee deposit;
- p̃_be is the probability density function for the bed surface not to exceed a certain value; it depends on the mean bed level.

these parameters depend both on the bedload composition (f_top i) and on the geometrical features characterizing the river bed (e.g. λ, ∆ and η); the first (bedload composition) is given as input to the module, while the second (geometry) is calculated by grass gis from the dem. the solution method, and consequently the implemented procedure (considering two different grainsizes), solves iteratively a set of three equations involving both sediment transport and bedform geometrical parameters to find the "right" values of F_i at each fixed depth in each fixed point. the equations to be solved in sequence for each examined point are:

1. \phi_{m\,lee\,e} = \sum_{i=1}^{n} \phi_i \cdot \bar{F}_{lee\,ie} – which gives the mean composition of the lee deposit of the bedform in which the point chosen for the prediction falls (φ_m lee e); F_lee ie is the volume fraction of sediments of the i-th grainsize in the lee deposit;

2. \delta_{ie} = -0.3 \cdot \frac{\phi_{m\,lee\,e} - \phi_i}{\sqrt{\sum_{i=1}^{n} \bar{F}_{lee\,ie}\,(\phi_i - \phi_{m\,lee\,e})^2}} \cdot \sqrt{\frac{(\rho_s - \rho)\,g}{1000} \cdot \frac{2^{-\phi_{m\,lee\,e}}}{\tau_{be}}} – which gives the values of the lee sorting parameter (δ_ie) for each grainsize examined; this parameter enters the definition of the lee sorting function (ω_ie); ρ and ρ_s are the densities of water and of the material, respectively, and τ_be is the bed shear stress (averaged over the river reach);
3. \bar{F}_{lee\,ie} = \frac{F_{a\,ie}\left(1 - \frac{\delta_{ie}}{6} - \frac{\delta_{ie}\,\Delta_a}{6\,\lambda_a \tan\nu}\right)^{-1}}{\sum_{i=1}^{n}\left[F_{a\,ie}\left(1 - \frac{\delta_{ie}}{6} - \frac{\delta_{ie}\,\Delta_a}{6\,\lambda_a \tan\nu}\right)^{-1}\right]} \cdot \frac{J(z)\,(1 + \delta_{ie}\,z^{\ast})\int_{\eta_{b\,min}}^{\eta_{b\,max}} \frac{J(z)}{\lambda\Delta}\,\tilde{p}_{be}\,\mathrm{d}\eta_b}{\int_{\eta_{b\,min}}^{\eta_{b\,max}} \frac{J(z)}{\lambda\Delta}\left[J(z)\,(1 + \delta_{ie}\,z^{\ast})\right]\tilde{p}_{be}\,\mathrm{d}\eta_b} – which gives the volume fraction of elements of the i-th grainsize in the lee deposit (F_lee ie); this value is subsequently converted into the corresponding F_i value using the relation F_ie = ω_ie · F_lee ie; F_a ie is the volume fraction of sediments of the i-th grainsize in the bedload transport material, ν is the angle of repose of the material, and z* is the nondimensional height (relative to the mean height of the specific bedform) of the point where the prediction has to be performed.

the analysis starts by hypothesizing an equal volume concentration for each grainsize examined (f1 = f2 = 50%); the three equations are then solved and the values obtained for F_i are compared with the hypothesized values. if the values do not match, the mean of the hypothesized and obtained value is taken for each volume fraction, another iteration is performed, a second set of values for F_i is obtained, and so on, until the values match.

v.eqsm.py – workflow and gui:

v.eqsm.py is the new grass module which predicts the vertical sorting of sediments for two different grainsizes (a bimodal mixture), starting from a dem and some bedload transport information. the module is written in python and is available for download at the following link: http://svn.osgeo.org/grass/grass-addons/vector/v.eqsm/. it uses scientific, graphical and mathematical python libraries, i.e. scipy, math and pyplot. the module works as shown in the diagram of fig. 3 and it is described step by step as follows:
geoinformatics fce ctu 2011 44 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 4: the v.eqsm.py gui. v.eqsm.py – an experiment: the v.eqsm.py testing phase is still running for the following reasons: first of all it was necessary to test the module on trusted data, secondarily it was necessary to have data from a controlled environment, as it is the flume one, to have higher accuracy than the possible. the model works efficiently only if bedforms are well defined so, it was quite difficult to find laboratory data (laboratory with a flume sufficiently extended, where water conditions are similar to natural ones) which had these characteristics. at last, to export the model on real river, it is necessary to have high accuracy dem (or bathymetries) to identify and geometrically define bedforms characteristics. data from saint anthony falls laboratory (minneapolis, mn, usa) has been used for this work. particularly some experiments have been done to test the reliability of the module using the large amount of data in streamlab dataset (2006), which is freely accessible by the following url: https://repository.nced.umn.edu/browse.php?dataset_id=8. this dataset has been collected from march 2006 to november 2007 and it derives from a series of experiments executed in the main channel of the laboratory (dimensions: 84 x 2.75 x 1.8 m). additional runs have been executed with different purposes (e.g. testing sieving methods or biological parameters under specific conditions) and both bed sample composition in depth and recirculation sample composition are available. moreover laser scanner output geoinformatics fce ctu 2011 45 https://repository.nced.umn.edu/browse.php?dataset_id=8 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics are available and each sample or cloud of points have a temporal reference, consequently, some bed samples have been selected in the way that, for each bed sample (which represents the expected result – it can be used to verify the prediction) it is possible to employ the nearest (in time) recirculation sample and cloud of points as input data. data recorded in august 2006 have been used for this work. in particular, three samples from the main channel of streamlab 2006 dataset are available for testing v.eqsm.py (fig. 5). figure 5: digital elevation model of the main channel (unit is 1mm); in red, samples available for testing eqsm.py. sample #1 has been chosen. let us choose the sample #1; in fact it is placed in the middle of the channel (near the axis) where border effects are less. for this sample we only have the grainsize distribution of the bed surface, anyway, some testing values of grainsize which better represent than the possible the grainsize distribution of the sample are chosen; let us decide to test the grainsizes: φ1,φ2 = −3.41,−0.79. where φ values are obtained by particle diameter value from the formula: φi = −log2(di) for these specific grainsizes, we have to obtain the following volume fraction values for the bed surface: f1,f2 = 0.799, 0.211. then, the dem is imported and the concentration of chosen grainsizes is read in the recirculation sample. the aim of the experiment is to predict the grainsize fraction on the channel bed surface, starting from bedload composition informations and channel bed morphology. 
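as a quick check of the grainsize conversion above, the relation φ_i = −log2(d_i), with d_i in millimetres, can be inverted as d_i = 2^(−φ_i). the short snippet below only illustrates this arithmetic and is not part of v.eqsm.py.

# illustrative only: converting between grain diameter d (mm) and the phi scale
from math import log

def phi(d_mm):
    return -log(d_mm, 2)

def diameter(phi_value):
    return 2.0 ** (-phi_value)

print(round(diameter(-3.41), 2), "mm")   # phi = -3.41 -> about 10.63 mm
print(round(diameter(-0.79), 2), "mm")   # phi = -0.79 -> about 1.73 mm
print(round(phi(10.63), 2))              # back to roughly -3.41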
resuming, the input parameters are: • grainsize dimensions to investigate: φ1,φ2 = −3.41,−0.79; • bedload material density: 1744 n/mc; • average bed shear stress (averaged on the river reach length): 10.47 pa; • volume fraction (%) of chosen grainsizes (φ1,φ2) in the bedload: 0.14556, 0.70888; • maximum depth of prediction: 10 mm; • nr. of steps in depth to investigate: 5. the command syntax given from terminal is: eqsm.py dem=dem vector=l1 dmax=10 nstep=5 res=10 lee_out=lee_out \ point_out=point_out fione=-0.79428 fitwo=-3.41009 ros=1744 to=12.752526971 \ faone=0.659383856 fatwo=0.340616144 n_iter=20 txtout=texout graphout=graph other input data are dem, the vector file of the river reach axis (for this test a reach of 10 meters length has been considered) and the name for the vector file of points where prediction are stored, the text file of report where predictions are written and the name for graphics in output. geoinformatics fce ctu 2011 46 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics the module takes some minutes to work and then gives in output some graphs, reports and vector files (fig. 6): • the graphic of river reach profile (fig. 6a), reporting the concavity and convexity rates, respecting the mean bed level; • a graphic of delta parameter (total height of each bedform) raw and logarithmic distribution (fig. 6b and 6c); • the graphics of the probability density distribution (using the weibull distribution, fig. 6d and 6e) and the weibull plots (fig. 6f and 6g) for maximum and minimum height of each dune, respecting the mean bed level; • the graphic reporting the volume fraction for each grainsize in depth (fig. 6h); • an output text file reporting both an input data summary and the output of the prediction (fig. 6i); • the vector file of the lee faces along the entire river reach (fig. 6j): in each feature some information about the morphology of the lee face and the entire bedform are stored, e.g. the lee face length, the dune length and total height, the position of the dune respecting the mean bed level etc.; • the vector file of point where informations about the prediction are stored (fig. 6k). figure 6a: the river reach profile. from the graphic in figure 6a, reporting the reach profile extracted (using r.profile) from the dem in figure 5, it is clear that the profile is mostly convex, so the bed surface is often higher than the mean bed level, and downstream, near the end of the reach, there’s a ca. 8cm peak which is probably an instrumental recording error (laser scanner bad reflection), it is reasonable not to consider into the prediction that specific bedform. that peak is highlighted also in figure 6b, where there’s only one occurrence of a 8cm delta value, at anyway the frequency delta distribution remains logarithmic (a lot of low delta values) and the graph in geoinformatics fce ctu 2011 47 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 6b: total height frequency values. figure 6c: total height logarithmic distribution values. fig. 6c fits well that distribution. the presence of low delta values is due to the not-so-well developed bedforms on the channel, and this is due to, on turn, the fact that the experiments collected in streamlab 2006 dataset were not finalized to this specific kind of tests. 
so more runs with higher water discharge values should be performed to obtain a better developed bed surface (with dunes instead of ripples – [14]; [15]; [16]). from graphs in figures 6d and 6e it is clear an almost perfect mirroring of etat and etab values, this means that the river profile is, for a high percentage, "average": where the top of the bedform (etat) lies over the mean bed level and the bottom (etab) lies below the mean bed level. in figures 6f and 6g the weibull plot is reported and the envelope fits correctly the extreme values of the parameters [17]. from the graphic reporting the prediction output (fig. 6h) we can evidence geoinformatics fce ctu 2011 48 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 6d: maximum height (etat parameter) frequency values. figure 6e: minimum height (etat parameter) frequency values. an almost linear behavior of grainsize composition with depth. this result has been also found by blom (2008) [18]. figures 6j and 6k, show how results are stored in grass gis, the advantage to have these vector files in output is that it is always possible to export the results and to deal with the complete output of the grainsize prediction and morphological analysis. conclusions and future developements: comparing the values obtained from the forecasting to the ones expected (recorded in the dataset), we can see by the graphics, that there are very low differences. it is evident that geoinformatics fce ctu 2011 49 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 6f: maximum height (etat parameter) weibull plot. figure 6g: minimum height (etab parameter) weibull plot. more accurate studies on similar cases have to be done, but these first results are encouraging in any case. it has to be said that it’s not easy to find data from appropriate structures as the main channel of safl, which is sufficiently large in the way that it has a behavior that can be compared, in some aspect, to a real river one and which allows to test efficiently the module, since there are few structures like that in the world and the river bed morphology has to be particular (bedforms have to be well evidenced and defined, no ripples). at anyway, using available data, a statistical analysis on river reach morphology and geometrical parameters distribution is performed. moreover the prediction results are given in three different formats (text file, graphics and vector files). in the end, the equilibrium sorting model and the module written (eqsm.py) can help researcher and worker in sampling and classification work geoinformatics fce ctu 2011 50 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 6h: graphic of volume fraction distribution in depth for examined grainsizes. figure 6i: text file with resume of input data and prediction results. because, if we can apply this model to real rivers, a prediction of river bed composition is formulated in the way that the sampling work can be limited to the minimum and necessary. this is a particular field of application for gis, most of all because the scale of the analysis is very large and units are millimeters instead of meters (normal unit of measure for gis data). so the application to grainsize distribution in depth is quite original, but the results obtained are encouraging. some limitation is still present both in the module and in the model. 
for example, this model can predict the bed composition with maximum depth equal to the total bedform height because of the peculiar definition of some model parameters, consequently the model gives "right" prediction at higher depth, only if the bedforms are well developed; currently, this represents an open task for the model. by the other hand, the module examines only two grainsizes and it is written for grass 6.5 and linux systems only, even if a windows version is in debug phase. moreover, to use the module on real river we need some high resolution data e.g. the dem, which must have a resolution at least geoinformatics fce ctu 2011 51 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics figure 6j: in the grass monitor – the extracted lee faces vector file (in white) plotted over the entire studied reach (in black) and the dem (in green); in the output form of the query – geometrical informations stored in each lee face; in the bottom view – a 3d visualization by nviz of the dem and the studied river reach (in black). figure 6k: in the grass monitor – the vector points file (in yellow) where prediction is executed and results are stored, plotted over the lee faces (white) and reach (black) vector file; in the output form of the query – prediction results stored in the points vector file; in the bottom view – the river reach profile (in blue) and the vector points file (in yellow) where prediction is executed. comparable to that one of the elements investigated (bedform dimensions) and a so accurate kind of data is often difficult to find, unless specific and/or deep studies. geoinformatics fce ctu 2011 52 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics acknowledgements: this work has been mostly executed during a research period at the university of illinois (urbanachampaign, il) in collaboration with gary parker research group so, special thanks refers to dr. enrica viparelli, which supported and contributed to this work with really precious hints. furthermore, special thanks to dr. astrid blom, from delft (nl) university of technology, cohautor of the equilibrium sorting model, and to saint anthony falls laboratory research group (prof. efi foufoula-georgiou, dr. jeff marr, dr. arvind singh, dr. sarah johnson), which helped providing all the necessary data. references 1. ganti, v., m. meerschaert, e. foufoula-georgiou, e. viparelli, and g. parker, normal and anomalous dispersion of gravel tracer particles in rivers, j. geophys. res. – earth surface, 115, f00a12, doi:10.1029/2008jf001222, 2010 2. hirano, m. (1971), river bed degradation with armouring. transactions jap. soc. civ. eng., 3(2), 194-195. 3. bagnold, r. a. (1977), bed load transport by natural rivers, water resour. res., 13(2), 303–312, doi:10.1029/wr013i002p00303. 4. wilcock, p. r., and j. b. southard (1989), bed load transport of mixed size sediment: fractional transport rates, bed forms, and the development of a coarse bed surface layer, water resour. res., 25(7), 1629–1641, doi:10.1029/wr025i007p01629. 5. blom, a. and g. parker (2004), vertical sorting and the morphodynamics of bed formdominated rivers: a modeling framework. journal of geophysical research, 109. 6. viparelli, e., hydel, r., salvaro, m., wilcock, p. r. and g. parker (2010)a, river morphodynamics with creation/consumption of grain size stratigraphy 1: laboratory experiments, journal of hydraulic research, vol. 48, no. 6. (2010), pp. 715-726. 
doi:10.1080/00221686.2010.515383 7. viparelli, e., sequeiros o. e., cantelli a., wilcock, p. r. and g. parker (2010)b, river morphodynamics with creation/consumption of grain size stratigraphy 2: numerical model, journal of hydraulic research, vol. 48, no. 6. (2010), pp. 727-741. doi:10.1080/00221686.2010.526759 8. blom, a., parker, g., ribberink, j.s. and h.j. de vriend (2006), vertical sorting and the morphodynamics of bed form-dominated rivers: an equilibrium sorting model. journal of geophysical research, 111. 9. neteler, m. and h. mitášová (2007), open source gis: a grass gis approach, kluwer academic publishers group, isbn 1-4020-7088-8 10. marr, j. d. g., gray, j. r., davis, b. e., ellis, c. and s. johnson (2010), large-scale laboratory testing of bedload-monitoring technologies: overview of the streamlab06 experiments. u.s. geological survey scientific investigations report 2010-5091, (p. 266-282; 910 kb) geoinformatics fce ctu 2011 53 minelli a. et al: a grass gis application for vertical sorting of sediments analysis in river dynamics 11. blom, a., parker, g., ribberink, j.s. and h.j. de vriend (2006), vertical sorting and the morphodynamics of bed form-dominated rivers: an equilibrium sorting model. journal of geophysical research, 111. 12. blom, a. and g. parker (2004), vertical sorting and the morphodynamics of bed formdominated rivers: a modeling framework. journal of geophysical research, 109. 13. parker, g., paola, c. and s. leclair (2000), probabilistic exner sediment continuity equation for mixtures with no active layer. journal of hydraulic engineering, 818. 14. dinehart, r. l. (1989), dune migration in a steep, coarse-bedded stream, water resour. res., 25(5), 911–923, doi:10.1029/wr025i005p00911. 15. singh, a., k. fienberg, d. j. jerolmack, j. marr, and e. foufoula-georgiou, experimental evidence for statistical scaling and intermittency in sediment transport rates, j. geophys. res., 114, f01025, doi:10.1029/2007jf000963, 2009. 16. singh, a., porté-agel, f., and e. foufoula-georgiou, on the influence of gravel bed dynamics on velocity power spectra, water resour. res., 46, w04509, doi:10.1029/2009wr008190, 2010. 17. van der mark, c. f., a. blom, and s. j. m. h. hulscher (2008), quantification of variability in bedform geometry,j. geophys. res., 113, f03020, doi:10.1029/2007jf000940. 18. blom, a. (2008), different approaches to handling vertical and streamwise sorting in modeling river morphodynamics, water resour. res., 44, w03415, doi:10.1029/2006wr005474. geoinformatics fce ctu 2011 54 software used for diploma thesis at geoinformatics vsb-tuo jan růžička institute of geoinformatics faculty of mining and geology, vsb-tuo e-mail: jan.ruzicka@vsb.cz keywords: software, gis, open source, freeware, commercial, diploma thesis abstract the paper describes software usage for diploma thesis presented by vsb-tuo students during students conference gisáček. a prepared statistics was build from papers available at web pages of the conference. the prepared statistics is not complete clear view on this area, but i do not have any other simple way how to prepare such statistics. the statistics was build just from the text of the papers. if a student mentioned any software than it is included in the statistics. summarized results are presented at the following figures with some general comments, that can be useful. the statistics is prepared only for years 2000 – 2006. a last year of the conference was not included, because of problems with availability of the proceedings. 
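a statistics of this kind can be assembled with a few lines of scripting. the sketch below only illustrates the counting approach described above and is not the procedure actually used by the author; the directory, file names and the software list are made-up examples.

# illustrative sketch of counting software mentions in conference papers;
# the directory, file names and software list are made-up examples
import glob

SOFTWARE = ["arcview", "arcinfo", "mapserver", "grass", "postgis", "jump", "qgis"]

counts = dict((name, 0) for name in SOFTWARE)
for path in glob.glob("papers/*.txt"):            # one plain-text file per paper
    text = open(path).read().lower()
    for name in SOFTWARE:
        if name in text:                          # a mention counts once per paper
            counts[name] += 1

for name, n in sorted(counts.items(), key=lambda item: -item[1]):
    print(name, n)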
software is categorized to three categories. a category fee contains only software that must be payed. this category includes software that can be obtained by students free of charge, but for firm usage must be payed. a category without fee includes software that can be used free of charge even for firm purposes, but is not available as an open source software. the last category open source includes open source software and in a case of this statistics, all open source software mentioned by students is free of charge as well. gis and cad software category gis and cad software includes all software that can be directly used for spatial data manipulation. there are not included systems such as mysql, oracle, gimp, statgraphics that can be used for spatial data manipulation, but they were not used as gis or cad software for the thesis. complete list of the gis and cad software that were mentioned by students follows: arcims, arcview ims, mapguide, jshape, pms, deegree, mapserver, arcinfo (arcgis), surfer argeinformatics fce ctu 2007 23 software used for diploma thesis at geoinformatics vsb-tuo cpad, topol, arcsde, erdas imagine, infomapa, mapobjects, kokeš, patch analyst, arcinfo (7.x), pc arc/info, flowmap, arcview, autocad, microstation, iras-c, geomedia, pathfinder office, cadfusion, kristýna gis, ocad, terra explorer, gdal, thuban, mapwindow, udig, ogr, qgis, gvsig, geotools, postgis, gpssim, openmap, jump, jgrapht, transform, nviz, vis5d+, trand’ák, fwtools, grass. ogr, gdal or other libraries were used as an individual software pieces not as a part of mapserver or grass software. figure 1: gis and cad software the figure 1 shows number of gis and cad software used during 2000 – 2006 period by students. we can see high increase of the number of open source and software without fee in the last two years. to be correct we must mention that for several tasks user in the open source area usually uses not only one software, but a set of software. in the area of commercial software this is not so common. for this purpose you can find figure 4 that shows software used by individual student. figure 2 shows number of web mapping software used during 2000 – 2006 period. we can see that there was free or open source software used every year when web mapping software was used. a complete list of the used web mapping software follows: arcims, arcview ims, mapguide, jshape, pms, deegree, mapserver. figure 4 shows how is gis and cad software used by individual student. there are three categories of students: students that use only fee software, students that use only free ware software (including open source) and students that use both kind of software. very interesting is popularity of esri software between students. although we use different software for teaching during the study, the first one software that is used is esri software (arcview in the past and arcgis nowadays). this is described at the figures 5, 6 and 7. figure 5 shows that other than esri commercial software for gis and cad is not generally used by our students. situation will not probably be better in the next years, because our university paid for site licence of the esri products. this license will be available for all geinformatics fce ctu 2007 24 software used for diploma thesis at geoinformatics vsb-tuo figure 2: web mapping software figure 3: gis and cad software without web mapping software students that are connected via vpn to the campus intranet. 
i feel this as a problem, that can be solved only by other gis and cad software vendors action via companies that can prepare diploma thesis for our students. figure 6 and 7 shows that our students are usually in face of decision: use esri or open source software. diploma thesis that are written by our students are usually prepared for some company (commercial, non-commercial). does it mean that for the practice are only these two options? i believe that not, and that the reason is a structure of our company partners specialization. very intersting is usage of individual software during whole period. at figure 8 you can see that more than 40% are arcinfo (from arcgis edition) and arcview 3.x, about 38% is geinformatics fce ctu 2007 25 software used for diploma thesis at geoinformatics vsb-tuo figure 4: gis and cad software by individual student figure 5: esri and other gis and cad fee software distributed between software with usage less than 2%. about 18% is divided between open source software mapserver, postgis, grass, jump and commercial software mapguide (before it has been open source), arcims. you can compare this individual software usage for 2000 – 2006 period with single year 2006 that is described at figure 9. arcinfo and arcview are still quite strong, but there is remarkable increase of other open source software. ogr and gdal in this case were used as an individual software pieces not as a part of mapserver or grass software. geinformatics fce ctu 2007 26 software used for diploma thesis at geoinformatics vsb-tuo figure 6: esri and other gis and cad open source and without fee software figure 7: esri and other gis and cad software dbms software students during the 2000 – 2006 period used following dbms (database management systems) software: oracle, ms access, ms excel, mysql, exist, postgresql, ms sql server (free edition). ms access and ms excel are not exactly the dbms, but they are generally used for data storage and manipulation. ms access that was mentioned 31 times (generally more than 80% of fee dbms) during whole period. this software is used for teaching in database systems subject. postgresql was mentioned only in a connection to postgis software. mysql was mentioned mainly in a connection with web mapping software. other software students during the 2000 – 2006 period used following other software: statgraphics, spss, transcat, j́ızńı řády, malováńı, hydrog, dok, hec-hms, aeolius, webcastle, casc2d, modflow, livecd, jgap, phpgraph, gipsy oasis, sarovar, gimp, virtualdub, apache, plone, zope, axis, tomcat, jboss. this is probably quite common that users of the open source software mention any software geinformatics fce ctu 2007 27 software used for diploma thesis at geoinformatics vsb-tuo figure 8: gis and cad software 2000 – 2006 figure 9: gis and cad software 2006 that they used. they usually mean this as an acknowledgements to open source software developers. users of the commercial software thank by payment. other software is mentioned geinformatics fce ctu 2007 28 software used for diploma thesis at geoinformatics vsb-tuo figure 10: dbms software figure 11: other software mainly in the last years, this can be cause of better preciseness of the participants. operating systems very interesting is usage of operating systems. there are only a few students that were completely satisfied with gnu/linux (unix – there was used only one unix system named irix) os only. 
the most common case is students satisfied with ms windows only, but in the last years a combination of ms windows and linux has become quite common.

figure 12: operating systems

all software

figures 13, 14 and 15 give a complete view of the statistics.

figure 13: all software (without operating systems)

programming languages

the students also mentioned the programming languages used for their diploma theses. there are no big surprises in the following figure. students can learn php, java and visual basic during their studies. the usage of the avenue language is only a reflection of arcview usage. vrml was (and maybe still is) the favourite modelling language of the former head of the institute of geoinformatics.

figure 14: all software (without os) by individual student
figure 15: all software (without os) 2000–2006

few words at the end

the statistics prepared for this paper are not representative, but they show some trends. students did not mention all the software they used for their diploma theses, and some of them did not mention any software at all. 129 papers from the students' conference were checked for this paper, and 122 of them mentioned at least one software product. this is the best we can get for our statistics; we could read all the theses to obtain better statistics, but that would probably not be efficient. i may have made mistakes and missed some of the mentioned software while preparing the statistics, but i believe that i captured at least 95% of the mentions.

figure 16: programming languages 2000–2006

otevřený katastr – a free internet solution for browsing data of the exchange format of the cadastre of real estate

karel jedlička, department of mathematics, geomatics section, faculty of applied sciences, university of west bohemia, smrcek@kma.zcu.cz. the author is supported by the research project msm 4977751301.
jan ježek, department of mathematics, geomatics section, faculty of applied sciences, university of west bohemia, h.jezek@centrum.cz.
jiří petrák, jiripetrak@seznam.cz.

keywords: otevřený katastr (open cadastre), parcel information, ownership folio, vf iskn, spi, sgi, postgis, umn mapserver, subversion

abstract

otevřený katastr (open cadastre) is a project whose goal is to create a free interface for access to the data of the cadastre of real estate. its realization consists of several tools which are distributed under the gnu/gpl licence. the first set of tools makes it possible to import data from the exchange format of the czech cadastre of real estate (vf iskn) into the postgis spatial database. further tools serve for publishing the cadastral data on the internet using the umn mapserver. to support further development of the tools, deployment of the subversion tool is being prepared.

introduction – the current situation at the university of west bohemia

the geomatics section at the university of west bohemia has long been concerned with fundamental data sets, among which the data of the cadastre of real estate undoubtedly belong. besides the otevřený katastr project presented here, we also cooperate with the company arcdata praha, s. r. o. (http://www.arcdata.cz) on testing their tools for browsing vf iskn data. we are further working on extending these tools with the possibility of combining the import of spi from vf iskn with attaching sgi from another external source.
the otevřený katastr project

otevřený katastr is a project based on the bachelor thesis of novotný 2005 [4] and the diploma theses of orálek 2006 [6] and petrák 2007 [7], defended at the university of west bohemia, department of mathematics, geomatics section. this contribution follows up on the article by jedlička and orálek 2006 [2], which describes the already implemented tools for importing data of the cadastre of real estate exchange format (vf iskn, described in čúzk 2007 [1]) into the postgis spatial database [8]. the contribution itself presents the tools (developed for the umn mapserver [9]) which, after graphical identification of a parcel in the file of geodetic information (sgi), list descriptive information from the file of descriptive information (spi), namely the basic information about the parcel and the ownership folio. it then presents the source code management tool subversion (svn) [10], under which further development of both of the above applications is planned.

the iskn exchange format – the data source

the iskn exchange format (vf iskn, sometimes also vfk) is briefly described in petrák 2007 [7] and in jedlinský 2006 [3], who also dealt with the topic, although from a different point of view and in different software. the official documentation of vf iskn is of course given in čúzk 2007 [1]. the reason for mentioning the additional sources is that they are richer in illustrative and explanatory models of the key parts of the database. this chapter presents only the smallest part of vf iskn necessary to explain the data background of the developed tools (see the following chapters).

vf iskn is a text file consisting of introductory information contained in a header (content, extent and currency of the data) followed by data blocks. the term data block corresponds to the term table in a relational database. the format further works with the term group of data blocks (or block of data blocks), which is a purely virtual grouping of data blocks (tables) that are related to each other. the detailed part of the vf iskn documentation in čúzk 2007 [1] is structured exactly by these groups; in the text file with vf iskn data (data.vfk), however, the groups are not represented in any way. for building the structure of a spatial database with information about ownership in a given territory, the groups of data blocks shown in fig. 1 are the most important. note: iskn, and consequently the exchange format as well, contains further information, in particular about proceedings in the cadastre of real estate; making this information accessible is currently not the subject of the otevřený katastr project, so it is not discussed here.

fig. 1: groups of data blocks containing data about the geometry of the digital cadastral map (sgi) and about ownership relations (spi)

tools of otevřený katastr

import of vf iskn into postgis: in 2006 a tool for importing vf iskn into postgis was developed. it imports the data blocks from vf iskn into tables in the postgresql spatial database and, following the documentation čúzk 2007 [1], rebuilds selected relations between them (fig. 2). the tool builds only the relations needed to create the spatial (specifically geometric, not topological) representation of parcels and buildings.

fig. 2: created relations; taken from orálek 2006 [6]

further, from the data blocks contained in the pkmp group (specifically sobr, sbp, sbm, hp, ob, op and dpm; see the previous chapter for the meaning of the abbreviations) the tool creates the spatial representation and stores it in the postgis spatial extension (see fig. 3) according to the ogc (http://www.opengeospatial.org) specifications.
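to make the last step more concrete, the following is a minimal sketch of turning numeric coordinates from an imported point block into postgis geometries. it is not the project's actual import code: the column names (souradnice_y, souradnice_x), the connection string and the srid value are assumptions used only for illustration, while addgeometrycolumn, st_makepoint and st_setsrid are standard postgis functions.

```java
import java.sql.*;

// sketch: after the vf iskn blocks are loaded into plain tables, build point
// geometries in postgis from the numeric coordinates of the sobr block.
// column names and the srid (here used for the czech s-jtsk system) are
// illustrative assumptions, not the project's real schema.
public class BuildPointGeometry {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost/katastr";   // hypothetical database
        try (Connection c = DriverManager.getConnection(url, "user", "password");
             Statement st = c.createStatement()) {

            // register a geometry column on the imported point table
            st.execute("SELECT AddGeometryColumn('sobr', 'geom', 5514, 'POINT', 2)");

            // fill it from the stored numeric coordinates
            st.executeUpdate(
                "UPDATE sobr SET geom = "
              + "ST_SetSRID(ST_MakePoint(souradnice_y, souradnice_x), 5514)");
        }
    }
}
```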
fig. 3: building the spatial representation from the vf iskn data blocks; taken from jedlinský 2006 [3]

"vf iskn contains point and line spatial data. the coordinates of all planimetric points (including the vertices of line features) are stored in a single table – sobr – and are linked to the features themselves through the linking table sbp. figure 3 shows that vf iskn does not use spatial data types for storing spatial data, but simple numeric data types; the structure of its spatial part is therefore more complicated than the approaches common in gis. the coordinates of each point of the planimetric features are stored in vf iskn only once. in contrast, the coordinates of points of non-planimetric line features are repeated in the sbm table as many times as there are lines on which the point lies. similarly, in gis it is usual that the geometry of each feature is stored separately, i.e. the coordinates of one point are stored in the database as many times as there are features whose vertex it is, for example when the outline of a building coincides with the boundary of a building parcel. when attributes need to be assigned to the vertices of polygon or line features, as in the sobr table, the points have to be copied into a separate point layer and the attributes stored there." (jedlinský 2006 [3])

it should be pointed out that different versions of vf iskn exist, described by means of addenda to the document čúzk 2007 [1]. the described tool, in its 2006 version, can import data of vf iskn version 2.8.

visualization of cadastral data through a map server

in 2007 the work on the project continued, mainly with the development of tools for listing data from the file of descriptive information for an object selected in the file of geodetic information, i.e. visualized in the map window. the tool first provides basic descriptive information about a parcel (building): cadastral unit, parcel number, building number, area, land use type and the number of the ownership folio. in the next step it lists information in the extent of the public part of the given ownership folio. detailed documentation of this part of the project is given in petrák 2007 [7]; here only the main points are outlined.

first it was necessary to update the tool for importing vf iskn data so that it could import version 3 of vfk, which had in the meantime been released and described by an addendum to čúzk 2007 [1]. above all, however, the import tool had to be substantially extended so that it also builds the relations concerning ownership. this turned out to be the key part of the work, because it is precisely the documentation of the relations that is a notable weakness of the document (čúzk 2007 [1]) describing vf iskn. a major contribution of the project is that within petrák 2007 [7] documentation in the form of era models was created, documentation which is otherwise not publicly available (the companies that develop import tools for vfk do not publish era models, although they undoubtedly had to work them out for their own purposes). this concerns in particular the detailed data models of the block groups nemo (real estate), vlst (ownership) and jpvz (other legal relations), as well as selected tables of the proceedings group (rize) and the group of soil quality parts of parcels (bdpa). further data models then show directly the relations between the individual tables that are needed to obtain the complete descriptive information about a particular parcel or building – i.e. for listing the ownership folio.
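to illustrate the kind of lookup these relations make possible, the sketch below identifies a parcel by a point clicked in the map (sgi) and lists basic descriptive data (spi). the project's own scripts are written in php; this java/jdbc variant only shows the shape of the query, and the table and column names (parcela, katastralni_uzemi, geom, cislo_lv, ...) as well as the srid and coordinates are hypothetical, not the real vfk-derived schema.

```java
import java.sql.*;

// sketch of a spatial identification query: point-in-polygon against the
// parcel geometry, joined to descriptive tables. all names are illustrative.
public class ParcelInfo {
    public static void main(String[] args) throws SQLException {
        double y = -744512.3, x = -1042780.9;   // illustrative s-jtsk coordinates
        String sql =
            "SELECT p.kmenove_cislo, p.vymera, p.cislo_lv, k.nazev "
          + "FROM parcela p JOIN katastralni_uzemi k ON k.id = p.ku_id "
          + "WHERE ST_Contains(p.geom, ST_SetSRID(ST_MakePoint(?, ?), 5514))";

        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/katastr", "user", "password");
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setDouble(1, y);
            ps.setDouble(2, x);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("parcel %s, area %d m2, folio %s, %s%n",
                        rs.getString(1), rs.getInt(2), rs.getString(3), rs.getString(4));
                }
            }
        }
    }
}
```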
based on the mentioned era models, php scripts were then written which, after spatial identification of a parcel (in sgi), list the related attribute information (spi). note: the technological background consists of the postgresql database with the postgis spatial extension, the umn mapserver for serving the spatial data, and the php language with embedded sql for retrieving and listing the attribute information. the individual technologies are described in more detail in the article by jedlička and orálek 2006 [2].

open development

one of the important aspects of similar projects carried out at the geomatics section is the possibility of their further development in the future and their links to and cooperation with other works. the goal of the section is to gradually build an infrastructure aimed at continuous and open software development, from semester projects up to diploma and doctoral theses. the most suitable approach to achieve this is the standard model of most open source projects and the use of the associated software engineering tools. the basic tools are:

- source code management – the key tools for the development of any software are products that make it possible to manage source code systematically. a number of comprehensive systems exist for this purpose; they allow managing access rights for editing, creating branches, and tracking changes and their history over time, and above all they enable effective cooperation of several programmers on one project. the source code of the described project (like a number of other works) is managed with subversion [10]. this approach both simplifies the development itself (through versioning) and, above all, provides a platform for cooperation of a wider circle of authors.
- wiki – in the near future a portal (based on wiki technology) is also planned, intended for developing the documentation of similar projects solved at the geomatics section, department of mathematics, university of west bohemia in pilsen.

the source code repository of the described product is available at http://mapserver.zcu.cz/svn/otevrenykatastr/ and the code is published under the gpl licence (http://www.gnu.org/copyleft/gpl.html).

conclusion

otevřený katastr is the outcome of a series of related final theses defended at the geomatics section, department of mathematics, university of west bohemia in pilsen. in the last year, the impulse for further development of the project was the interest of a commercial company in deploying it in practice. one of the authors is considering delivering turnkey information systems to small municipalities; the key component of such systems turns out to be exactly the application providing information about the data of the cadastre of real estate. the authors will welcome further interested parties building on the idea of otevřený katastr.

references

1. čúzk. výměnný formát iskn v textovém tvaru. http://www.cuzk.cz/dokument.aspx?prareskod=10&menuid=10283&akce=doc:10-vf_iskntext [cited 18 sep 2007].
2. jedlička, k.; orálek, j. prostorové rozhraní informačního systému malé obce řešené v open source software. in geoinformatics fce ctu. praha: čvut, 2006, pp. 129–143. issn 1802-2669. http://geoinformatics.fsv.cvut.cz/wiki/index.php/prostorov%c3%a9_rozhran%c3%ad_informa%c4%8dn%c3%adho_syst%c3%a9mu_mal%c3%a9_obce_%c5%99e%c5%a1en%c3%a9_v_open_source_software
3. jedlinský, j. způsoby uložení prostorových dat v databázi pro účely pozemkového datového modelu [diploma thesis]; supervisor karel jedlička. plzeň: západočeská univerzita, fakulta aplikovaných věd, 2006. 58 pp. http://gis.zcu.cz/studium/dp/2006/jedlinsky__zpusoby_ulozeni_prostorovych_dat_v_databazi_pro_ucely_pozemkoveho_datoveho_modelu__dp.pdf
4. novotný, j. informační systém malé obce [bachelor thesis]; supervisor otakar čerba. plzeň: západočeská univerzita, fakulta aplikovaných věd, 2005. 56 pp. http://gis.zcu.cz/studium/zaverecneprace/2005/novotny_informacnisystemmaleobce_bp.pdf
5. ogc. open geospatial consortium, inc. http://www.opengeospatial.org [cited 10 oct 2007].
6. orálek, j. možnosti využití nekomerčního geografického software pro tvorbu prostorového rozhraní informačního systému malé obce [diploma thesis]; supervisor karel jedlička. plzeň: západočeská univerzita, fakulta aplikovaných věd, 2006. 60 pp. http://gis.zcu.cz/studium/zaverecneprace/2006/oralek__moznosti_vyuziti_nekomercniho_geografickeho_software_pro_tvorbu_prostoroveho_rozhrani_informacniho_systemu_male_obce__dp.pdf
7. petrák, j. open source mapový server pro data katastru nemovitostí [diploma thesis]; supervisor karel jedlička. plzeň: západočeská univerzita, fakulta aplikovaných věd, 2007. 57 pp. http://gis.zcu.cz/studium/zaverecneprace/2007/petrak__open_source_mapovy_server_pro_data_kn__dp.pdf
8. postgis. postgis documentation. http://postgis.refractions.net/documentation/ [cited 18 sep 2007].
9. umn mapserver. umn mapserver documentation. http://mapserver.gis.umn.edu/docs [cited 18 sep 2007].
10. subversion. subversion project home page. open source software engineering tools. http://subversion.tigris.org/ [cited 18 sep 2007].
11 http://postgis.refractions.net/documentation/ 12 http://mapserver.gis.umn.edu/docs 13 http://subversion.tigris.org/ geinformatics fce ctu 2007 117 http://postgis.refractions.net/documentation/ http://mapserver.gis.umn.edu/docs http://subversion.tigris.org/ geinformatics fce ctu 2007 118 freeware for gis and remote sensing lena halounová department of mapping and cartography, faculty of civil engineering czech technical university in prague halounova fsv.cvut.cz keywords: gis, freeware abstract education in remote sensing and gis is based on software utilization. the software needs to be installed in computer rooms with a certain number of licenses. the commercial software equipment is therefore financially demanding and not only for universities, but especially for students. internet research brings a long list of free software of various capabilities. the paper shows a present state of gis, image processing and remote sensing free software. introduction gis and remote sensing tasks are mutually interlinked and division of software to gis software and remote sensing software means that gis software is able to work with remote sensing data and to perform simple tasks with them. on the contrary, gis functions are embedded in remote sensing software. this duality is a result of close relation between these regions. looking for the gis free software, three groups of them can be found � viewers of commercial software – arcreader 91, arcexplorer 4.02 of esri, geomedia® viewer of intergraph, freelook 3.13 for envi, etc. � software tools for certain tasks as gdal 1.1.5 [11], libgeotiff, mb-system4 for bathymetry and backscatter imagery data derived from multibeam, interferometry, and sidescan sonars, mitab5 mitab a c++ library for reading and writing mapinfo .tab (binary) and .mif/mid files. � complete gis software – grass, ilwis, fmaps covering gis together with remote sensing. 1 http://www.arcdata.cz/download/arcreader/arcreaderwebsetup.exe 2 http://www.arcdata.cz/download/arcexplorer/ae4javasetup.exe 3 http://www.envi-sw.com/ 4 http://www.ldeo.columbia.edu/mb-system/ 5 http://mitab.maptools.org/ geinformatics fce ctu 2007 53 http://www.arcdata.cz/download/arcreader/arcreaderwebsetup.exe http://www.arcdata.cz/download/arcexplorer/ae4javasetup.exe http://www.envi-sw.com/ http://www.ldeo.columbia.edu/mb-system/ http://mitab.maptools.org/ freeware for gis and remote sensing � software for image processing – intel, image analyzer 1.27 the open source gis [7] brings a list of more than 150 free gis software available for users. the web page remotesensing.org [15] is a source of many free software for remote sensing including the link to remote sensing tutorial prepared by william j. campbell from nasa [10]. viewers of commercial software following examples shows limits of viewers for common users. geomedia extregistered viewer [13]. key features of the software are data access providing read-only access to geospatial data in microsoft access, mapinfo and shapefile formats. with geomedia viewer new connections to the above formats can be also created if you have other data sets you want to access. geomedia viewer let us create new geoworkspaces and save any changes made to existing geoworkspaces. geomedia viewer includes a powerful set of navigation commands for moving around and exploring your data as pan, zooming, etc. and supports multiple windows including map and data windows. arcexplorer [9]. 
as a complete data explorer, arcexplorer – java lets display and query a wide variety of standard data sources. using arcexplorer as a stand-alone desktop application, it is possible to consume shapefiles, a variety of images, arcsde layers, and more, allows to also pan and zoom through these map layers and identify, locate, and query their spatial and attribute information. you can buffer a selected set of features and view their attributes. arcexplorer also provides the ability to thematically map your data, symbolizing various features based upon information found in the data’s attribute table, displaying graduated symbology, or classbreaks rendering for instance. software tools for certain tasks there are many software packages for large number of different tasks and questions for gis purposes [6]. project conversion software form an important group of them. gdal 1.1.5 [11] is a translator library for raster geospatial data formats. as a library, it presents a single abstract data model to the calling application for all supported formats. a initial skeleton of gdal has formed, and operates for a few formats [11]. geotiff libs [16] – geotiff represents an effort by over 160 different remote sensing, gis, cartographic, and surveying related companies and organizations to establish a tiff6 based interchange format for georeferenced raster imagery [16]. proj 4.4.3 [8] – this package offers command line tools and a library for performing respective forward and inverse transformation of cartographic data to or from cartesian data with a wide range of selectable projection functions. o.s. gnu/linux and other unices windows [8]. 6 http://www.awaresystems.be/imaging/tiff/faq.html geinformatics fce ctu 2007 54 http://www.awaresystems.be/imaging/tiff/faq.html freeware for gis and remote sensing virtual terrain project [17] – the goal of vtp is to foster the creation of tools for easily constructing any part of the real world in interactive, 3d digital form. this goal will require a synergetic convergence of the fields of cad, gis, visual simulation, surveying and remote sensing. vtp gathers information and tracks progress in areas such as procedural scene construction, feature extraction, and rendering algorithms. vtp writes and supports a set of software tools, including an interactive runtime environment (vtp enviro). the tools and their source code are freely shared7 to help accelerate the adoption and development of the necessary technologies [15]. complete gis/remote sensing software there are four useful gis and remote sensing free software packages – grass, ilwis, ossim and fmaps. grass – geographic resources analysis support system commonly referred to as grass, this is a geographic information system (gis) used for geospatial data management and analysis, image processing, graphics/maps production, spatial modeling, and visualization. grass is currently used in academic and commercial settings around the world, as well as by many governmental agencies and environmental consulting companies. grass is a raster gis. there are a large range of applications of grass: geography, landscape ecology, epidemiology, remote sensing8, urban planning, biology, geophysics, hydrology, groundwater flow modeling (grass and modflow9), vector network analysis (in part of grass 610), geostatistics11, raster 3d volume12 (voxel) [5]. ilwis [1] version 3.4. 
is a software created for windows platform and the following list shows the product tools: � integrated raster and vector design � import and export of widely used data formats � on-screen and tablet digitizing � comprehensive set of image processing tools � orthophoto, image georeferencing, transformation and mosaicing � advanced modeling and spatial data analysis � 3d visualization with interactive editing for optimal view findings � rich projection and coordinate system library � geo-statistical analyses, with kriging for improved interpolation � production and visualization of stereo image pairs 7 http://www.vterrain.org/site/privacy.html#license 8 http://mpa.itc.it/rs/ 9 http://www.valledemexico.ambitiouslemon.com/gwmodelling.html 10 http://grass.itc.it/grass61/screenshots/network.php 11 http://grass.itc.it/statsgrass/index.php 12 http://grass.itc.it/grid3d/ geinformatics fce ctu 2007 55 http://www.vterrain.org/site/privacy.html#license http://mpa.itc.it/rs/ http://www.valledemexico.ambitiouslemon.com/gwmodelling.html http://grass.itc.it/grass61/screenshots/network.php http://grass.itc.it/statsgrass/index.php http://grass.itc.it/grid3d/ freeware for gis and remote sensing � spatial multiple criteria evaluation 52°north initiative for geospatial open source software gmbh is an international research and development company whose mission is to promote the conception, development and application of free open source geo-software for research, education, training and practical use. 52°north backs an open initiative, which is driven by leading research organizations and individuals in the international gis field. the work of all partners results in a collection of java based web services implementations [1]. fmaps [5] is an open source gis/rs (geographic information system/ remote sensing) application on the linux13 and gnome14 compatible platforms. the database engine is postgresql15. postgresql is an opensource sql server. ossim (open source software image map) [14] is a high performance software system for remote sensing, image processing, geographical information systems and photogrammetry. it is an open source software project maintained at http://www.ossim.org and has been under active development since 1996. the lead developers for the project have years of experience in commercial and government remote sensing systems and applications. ossim has been funded by several us government agencies in the intelligence and defense community and the technology is currently deployed in research and operational sites. the name ossim is a contrived acronym (open source software image map) that is pronounced “awesome” – the acronym was established by our first government customer. designed as a series of high performance software libraries it is written in c++ employing the latest techniques in object oriented software design. a number of command line utilities, gui tools and applications, and integrated systems have been implemented with the baseline. many of those tools and applications are included with the software releases. multispec is a freeware multispectral image data analysis system (latest release: 5-12-2007) multispec is being developed at purdue university16, west lafayette17, in18, by david landgrebe19 and larry biehl20 from the school of electrical and computer engineering21, itap22 and lars23. 
it results from an on-going multiyear research effort which is intended to define robust and fundamentally based technology for analyzing multispectral and hyperspectral image data, and to transfer this technology to the user community in as rapid a manner as possible. the results of the research are implemented into multispec and made available to the user community via the download pages. multispec© with its documentation© is distributed without charge. 13 http://www.linux.org/ 14 http://www.gnome.org/ 15 http://www.postgresql.org/ 16 http://www.purdue.edu/ 17 http://www.wintek.com/wlaf/ 18 http://www.state.in.us/ 19 http://dynamo.ecn.purdue.edu/~landgreb/ 20 http://dynamo.ecn.purdue.edu/~biehl/ 21 http://www.purdue.edu/ece 22 http://www.itap.purdue.edu/ 23 http://www.lars.purdue.edu/ geinformatics fce ctu 2007 56 http://www.linux.org/ http://www.gnome.org/ http://www.postgresql.org/ http://www.postgresql.org/ http://www.ossim.org http://www.purdue.edu/ http://www.wintek.com/wlaf/ http://www.state.in.us/ http://dynamo.ecn.purdue.edu/~landgreb/ http://dynamo.ecn.purdue.edu/~landgreb/ http://dynamo.ecn.purdue.edu/~biehl/ http://www.purdue.edu/ece http://www.itap.purdue.edu/ http://www.lars.purdue.edu/ freeware for gis and remote sensing radar interferometry interferometry is a separate part of remote sensing working with radar image pairs for determination of dem on one side, and small surface movements – subsidences, e.g., on the other side. it is a separate software and branch as well because its processing method differs significantly from other remote sensing image data evaluation. there is one freeware called doris [11]. doris (delft object-oriented radar interferometric software) the delft institute of earth observation and space systems24 of delft university of technology25 developed processor for interferometric sar. it is a freeware for non-commercial usage. the software works with european ers and envisat data, japanese jers, and canadian radarsat. image processing software intel offers open source computer vision library with following library areas [12]: � image functions: creation, allocation, destruction of images. fast pixel access macros. � data structures: static types and dynamic storage � contour processing: finding, displaying, manipulation, and simplification of image contours � geometry: line and ellipse fitting. convex hull. contour analysis � features: 1st & 2nd image derivatives. lines: canny, hough. corners: finding, tracking. � image statistics: in region of interest: count, mean, std, min, max, norm, moments, hu moments � image pyramids: power of 2. color/texture segmentation � morphology: erode, dilate, open, close. 
gradient, top-hat, black-hat � and many others intel extregistered image processing library (included in opencv winos download) comprises [12]: � image creation and access � image arithmetic and logic operations � image filtering � linear image transformation � image morphology 24 http://www.deos.tudelft.nl/ 25 http://www.tudelft.nl/live/pagina.jsp?id=b226846d-f19f-4c34-97ed-165fecc5ad8f&lang=nl geinformatics fce ctu 2007 57 http://www.deos.tudelft.nl/ http://www.deos.tudelft.nl/ http://www.tudelft.nl/live/pagina.jsp?id=b226846d-f19f-4c34-97ed-165fecc5ad8f&lang=nl freeware for gis and remote sensing � color space conversion � image histogram and thresholding � geometric transformation (zoom-decimate, rotate, mirror, shear, warp, perspective transform, affine transform the intel software covers wide range of image processing methods applicable in remote sensing and other branches. the largest user group can be found among digital photographs users. image analyzer 1.27. [6] is a freeware for windows 98/me/2000/xp/vista. advanced image editing, enhancement and analysis software. the program contains both most image enhancement features found in conventional image editors plus a number of advanced features not even available in professional photo suites as: � automatic brightness, contrast, gamma and saturation adjustment � build-in conventional and adaptive filters for noise reduction, edge extraction etc. � retouch tools � deconvolution for out-of-focus and motion blur compensation (see below) � easy red-eye removal � user specified filters in spatial and frequency domain � resize, rotate, crop and warping of images � scanner, camera and printer support � file format support: read/write bmp, ico, cur, wmf, emf, png, mng, gif, pcx, jpeg and jpeg 2000 images � morphological operations � color model conversion: rgb, cmy, hsi, lab, ycbcr, yiq and pca � distance, fourier and discrete cosine transformation � math expression module for creating and transforming images and advanced ”pocket” calculator with equation solver � plugin system for adding more specialized features. see below for available plugins conclusion there are more viewers belonging to the software group. viewers are tools for ready data and users with low level of demands and are prepared for large society of users and created by commercial gis/remote sensing companies. they are in fact useless for most processing of advanced users and students. the software packages focused on certain tasks are useful for solution of individual problems which cannot be processed in other software on one side or for solution of stand alone tasks. their application for education of gis or remote sensing as a whole is not suitable. the university education of gis and remote sensing can be based on complete software like grass and others. the open software allows to use existing modules geinformatics fce ctu 2007 58 freeware for gis and remote sensing on one side, and are a place for development of new tools on the other side. therefore, whenever the education is focused on advanced experienced students, these software packages are the best solution for education. references all web pages cited on 18 september 2007 1. http://52north.org/index.php?option=com projects&task=showproject&id=30 2. http://cobweb.ecn.purdue.edu/˜biehl/multispec/ 3. http://enterprise.lr.tudelft.nl/doris/ 4. http://fmaps.sourceforge.net/ 5. http://grass.itc.it/ 6. http://meesoft.logicnet.dk/analyzer/ 7. http://opensourcegis.org/ 8. http://proj.maptools.org/ 9. 
http://www.esri.com/software/arcexplorer/about/overview.html 10. http://www.fas.org/irp/imint/docs/rst/front/tofc.html 11. http://www.gdal.org/index.html 12. http://www.intel.com/technology/computing/opencv/overview.htm 13. http://www.intergraph.com/gviewer/key features.asp 14. http://www.ossim.org/ossim/ossimhome.html 15. http://www.remotesensing.org/home.html 16. http://www.remotesensing.org/geotiff/geotiff.html 17. http://www.vterrain.org/ the paper was prepared in the framework of the project of the czech grant agency ga čr 205/06/1037 application of geoinformation technologies for improvement of rainfall-runoff relationships geinformatics fce ctu 2007 59 http://52north.org/index.php?option=com_projects&task=showproject&id=30&itemid=127 http://cobweb.ecn.purdue.edu/~biehl/multispec/ http://enterprise.lr.tudelft.nl/doris/ http://fmaps.sourceforge.net/ http://grass.itc.it/ http://meesoft.logicnet.dk/analyzer/ http://opensourcegis.org/ http://proj.maptools.org/ http://www.esri.com/software/arcexplorer/about/overview.html http://www.fas.org/irp/imint/docs/rst/front/tofc.html http://www.gdal.org/index.html http://www.intel.com/technology/computing/opencv/overview.htm http://www.intergraph.com/gviewer/key_features.asp http://www.ossim.org/ossim/ossimhome.html http://www.remotesensing.org/home.html http://www.remotesensing.org/geotiff/geotiff.html http://www.vterrain.org/ geinformatics fce ctu 2007 60 geoinformatics study at the ctu in prague geoinformatics study at the ctu in prague prof. leoš mervart department of advanced geodesy faculty of civil engineering, ctu in prague e-mail: mervart@fsv.cvut.cz prof. aleš čepek department of mapping and cartography faculty of civil engineering, ctu in prague e-mail: cepek@fsv.cvut.cz key words: key education, curricula, geoinformation, software development abstract at the ctu in prague, there is a long tradition of master degree courses in geodesy, geodetic surveying and cartography. taking into account the fast development of information technologies in recent decades, we decided to prepare a new study program that would combine computer science with a background of geodetic and cartographic know-how. apart from other sources, our plans were inspired and influenced by the review of education needs, a report prepared by stig enemark (prague 1998), and by our experience from several virtual academy workshops. we have decided to call this program “geoinformatics” to emphasize the role of computer technologies in collecting, analyzing and exploiting information about our planet. within this presentation we will explain the basic ideas behind our new study program and emphasize the features that distinguish it from classical geodetic or cartographic programs. we will mention the connection between our new study program and several geodetic and software projects running at our institute – software development for real-time gps applications, cooperation with the astronomical institute, university of berne, on the development of so-called bernese gps software, the gnu project gama for adjustment of geodetic networks, etc. what’s in a name? what’s in a name? that which we call a rose by any other word would smell as sweet. – romeo and juliet (act ii, scene ii) we dare to disagree with the great poet. the name of a study program can be important for students finishing their high school and deciding which university they want to apply to. let us make a tiny linguist digression. 
according to [1], geodesy is the scientific discipline that deals with measurement and representation of the earth, its gravitational field and geodynamic phenomena (polar motion, earth tides, and crustal motion) in three-dimensional time varying space. the second branch of our traditional study programs, cartography, is (according to [1] again) the study and practice of making maps or globes. we find these definitions good, geinformatics fce ctu 2006 4 geoinformatics study at the ctu in prague but do they reveal that computer science and informatics nowadays play a key role in our discipline? this question can be important for young people deciding about the direction of their future professional career. a new word, geomatics, was apparently coined by b. dubuisson in 1969. it is the discipline of gathering, storing, processing, and delivering geographic information. this broad term applies to both science and technology, and integrates several specific disciplines (including geodesy, cartography, and, last but not least, geographic information systems). we were tempted to call our study program “geomatics”, however, at the end we voted for another new word – geoinformatics. informatics (or information science) is studied as a branch of computer science and information technology and is related to database, ontology and software engineering. it is primarily concerned with the structure, creation, management, storage, retrieval, dissemination and transfer of information. we understand “geoinformatics” as a science that synthetizes the achievements of informatics with knowledge of the principles of geodesy and cartography. in the geodetic courses we will teach our students the mathematical and physical backgrounds of geodesy as well as the practise of surveying – the techniques of gathering and processing measurements. within the study of geoinformatics we will teach our students the theoretical principles of geodesy and many things about computers and information technologies. our projects – geodesy and computer science in concordance protest my ears were never better fed with such delightful pleasing harmony. – pericles, prince of tyre (act ii, scene v) we have studied geodesy and we like sitting at computers writing our own applications. can we bring these two things into concordance? we are deeply convinced that we can. we will present some of our projects to demonstrate the interrelations between geodesy and informatics. real-time monitoring of gps networks the first project is related to our work for the company gps solutions, boulder, usa. within the contract between gps solutions and the japanese geographical survey institute (gsi) we take part in the development of a program system for real-time processing of gps data with the highest possible accuracy [8]. together with our american colleagues we have prepared a software system consisting of a server that collects data from many gps receivers and the rtnet (real-time network) processing program, which computes very accurate positions of gps stations in real time. the system is primarily designed for the real-time processing of data stemming from the japanese network geonet (gps earth observation network) – a unique network consisting of 1200 permanent gps stations. one of the main purposes of geonet is to monitor seismic deformations. 
understanding the character of seismic waves and the laws of their propagation may help in the design of earthquake-resistant buildings or even the establishment of an alert system that could save human lives. the left-hand side geinformatics fce ctu 2006 5 geoinformatics study at the ctu in prague of the following figure shows a map of several geonet stations located on the southern coast of hokkaido island. the right-hand side shows the motion of these stations during the tokachi-oki earthquake on september 26th, 2003, computed, by our rtnet software. figure 1: real-time monitoring of gps networks we find it fascinating to see the huge seismic shocks revealed by gps measurements. the plot clearly shows the propagation of seismic waves – stations closer to the epicenter sense the waves earlier than the more distant stations. the time delay between the so-called primary and secondary seismic waves can be observed by comparing the horizontal and vertical components of the station motions. bernese gps software we are very proud to have the opportunity to take part in the development of the so-called bernese gps software. this software package has been developed at the astronomical institute, university of berne, switzerland since the 1980s. it is used at many institutions round the globe for post-processing gps data with the highest accuracy and for various other purposes – the software is capable to estimate a large number of different kinds of parameters: station coordinates, earth rotation parameters, satellite orbits, parameters of the atmosphere, etc. the software is recognized for the quality of its mathematical model, which ensures the accuracy of the results. it is the know-how of geodesy and celestial mechanics that stands behind the software’s success. however, we are convinced that the technical quality of the program, its availability on different computer platforms, and a given level of user-friendliness are of major importance, too. the figure above shows a window of the bernese menu system. the bernese gps software is an example of the concordance between geodesy and informatics. it is becoming usual novadays that many mathematical achievements find their “materialization” in sophisticated software projects. geinformatics fce ctu 2006 6 geoinformatics study at the ctu in prague figure 2: bernese gps software gnu gama and other free software projects talking about software development, free software, a specific software area, is one of the fields that we want our students to get involved in. in order to learn more about the phenomenon of free software, selected essays of richard m. stallman are probably the best starting point (or one can read the interview [7]). our first major free software project gnu gama is dedicated to adjustment of geodetic networks. it has been presented at various fig meetings ([3], [4]), so we need not describe it in detail here. the beginning of the gama project was influenced by our experience from virtual academy meetings, where it was first presented as a project aimed at motivating our students to get involved in software development and international collaboration. another example of our free software projects was presented last year at the fig working week in athens [5]. the software part of this project was the gps observation database written by jan pytel in close collaboration with prof. kostelecky, and this year we will extend our collaboration to a project to adjust combined solutions from various observation techniques (gps, vlbi, etc). 
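gnu gama performs a rigorous least-squares adjustment of geodetic networks. as a toy illustration of the underlying idea (this is not gama code, and the observed values are invented), the sketch below adjusts a tiny levelling network: three observed height differences between a fixed point a and unknown points b and c, solved through the normal equations with unit weights.

```java
// toy least-squares adjustment of a levelling network (illustrative only).
public class LevellingAdjustment {
    public static void main(String[] args) {
        double ha = 100.000;                       // fixed height of point a [m]
        // observed height differences: a->b, b->c, a->c  [m]
        double hab = 1.200, hbc = 0.805, hac = 2.010;

        // observation equations for the unknowns hb, hc:
        //   hb       = ha + hab
        //  -hb + hc  = hbc
        //        hc  = ha + hac
        double[][] a = { {1, 0}, {-1, 1}, {0, 1} };
        double[]   l = { ha + hab, hbc, ha + hac };

        // normal equations n x = b with n = a^t a, b = a^t l (unit weights)
        double[][] n = new double[2][2];
        double[]   b = new double[2];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < 2; j++) {
                b[j] += a[i][j] * l[i];
                for (int k = 0; k < 2; k++) n[j][k] += a[i][j] * a[i][k];
            }

        // solve the 2x2 system by cramer's rule
        double det = n[0][0] * n[1][1] - n[0][1] * n[1][0];
        double hb = (b[0] * n[1][1] - n[0][1] * b[1]) / det;
        double hc = (n[0][0] * b[1] - b[0] * n[1][0]) / det;

        System.out.printf("adjusted heights: b = %.4f m, c = %.4f m%n", hb, hc);
        // expected output: b = 101.2017 m, c = 102.0083 m
    }
}
```

gama handles the general case (coordinates, directions, distances, height differences and 3d vectors with full weight matrices), but the normal-equation structure sketched here is the same.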
we believe that the new courses on geoinformatics, with an intensive focus on the theoretical background, will help us to attract more talented students, who will be able to collaborate on software projects of a scientific nature, as described above. the future of geodetic science to-morrow, and to-morrow, and to-morrow, creeps in this petty pace from day to day. – macbeth (act v, scene v) geinformatics fce ctu 2006 7 geoinformatics study at the ctu in prague from macbeth’s point of view, time seems to have passed slowly. nowadays we know the relativistic effects: for people planning the future of their educational facilities time may run faster than they wish. our present-day today’s knowledge may appear insufficient for tomorrow’s needs. how should we deal with this situation? what knowledge will our students need in several years after they have graduated? what should we teach them? this is not an easy question. taking into account our inability to estimate the precise needs of the future, we are convinced that we have to concentrate on teaching methods, ways of thinking, and general theories rather then specialized topics. our students should primarily be able to gather and analyze information. and this is actually the bottom line of geoinformatics. without knowledge of our primary science – geodesy – the most breathtaking informatics achievements are useless for us. but, conversely, our discipline (like any modern science) cannot develop without sophisticated information processing. it cannot live without informatics. from the practical point of view we had, first of all, to follow the framework of bachelor/master degree programs at the faculty of civil engineering, where bachelor programs last 8 semesters and master degree programs last three semesters. to distinguish clearly between our bachelors and masters, we decided that bachelors should typically be professional users of geoinformatic systems, in contrast to masters, who should become developers, analysists and leading managers of geoinformation systems. one of the major information systems in the czech republic is the cadastral information system. when preparing the bachelor curricula, we decided that education in the cadastre should be given in the same full level as in the existing study branch in geodesy and cartography. another strategic decision was that our bachelors should be prepared for managerial skills, and thus we put substantial emphasis on education in the social sciences. the definition of the structure of the new social science courses was fully in the competence of the head of our department of social sciences, doc. václav lǐska. with the focus on our main strategic priorities � mathematics � social sciences and management � geodesy � applied informatics the courses offered for the bachelor degree program are summarized in the following table table 1: bachelor degree study program course sem. course sem. 
calculus 1 1 calculus 2 2 technical geodesy 1 constructive geometry 2 os linux 1 foreign language 2 foreign language 1 technical geodesy 2 introduction to law 1 database systems 2 physics 1 1 rhetoric 2 introduction to numerical mathematics 1 physics 2 geinformatics fce ctu 2006 8 geoinformatics study at the ctu in prague calculus 3 3 calculus 4 4 introduction to economics 3 psychology and sociology 4 theoretical geodesy 1 3 theoretical geodesy 2 4 theory of errors and adjustment 1 3 theory of errors and adjustment 2 4 programming language c++ 3 project informatics 4 environmental engineering 3 mathematical cartography 4 practical training in surveying 3 foreign language 4 photogrammetry and remote sensing 1 5 photogrammetry and remote sensing 2 6 probability and math. statistics 5 cadastre 6 gis 1 5 gis 2 6 mapping 5 www services 6 project – geodesy 5 engineering geodesy 6 real estate law 5 electives (4 credits) 6 management psychology 5 optimization methods 7 elective courses (16 credits) 8 information systems 7 bachelor diploma work 8 image data processing 7 topograph.and thematical cartography 7 ethics 7 elective courses (4 credits) 7 pre-diploma project 7 each bachelor student is required to choose from 25 elective subjects starting from the sixth semester. the courses offered for the master degree program in geoinformatics are table 2: master degree study program course sem. course sem. graph theory 1 numerical mathematics 2 object-oriented programming 1 project-statistics 2 statistics-robust methods 1 project-professional specialization 2 elective courses (12 credits) 1 elective courses (14 credits) 2 social sciences and management 3 elective courses (4 credits) 3 diploma project 3 geinformatics fce ctu 2006 9 geoinformatics study at the ctu in prague the compulsory courses in the master degree program are accompanied by the offer of 36 elective courses ranging from tensor calculus and mathematical modeling to space and physical geodesy or combinatorial optimization. we expect most of our bachelors to continue their studies at master degree level. our program is designed on the basis of this presumption. however, our goal is also to open the new study branch to other fields, namely to geaduates of other bachelor programs at our faculty, e.g., environmental engineering or the existing study branch geodesy and cartography geoinformatics geodesy and cartography geoinformatics → environmental engineering (or water engineering and water structures) system engineering in the building industry (it) in june 2005 the czech ministry of education, youth and sports approved our new curricula, and the first semester in geoinformatics will be opened in 2006. the first semester of the master degree program in geoinformatics will open in 2007. one of our next steps will be to fully harmonize the first two years of the programs in geodesy and cartography with the new geoinformatics study plan. references 1. vańıček p., krakiwski e.j.: geodesy: the concepts, north-holland, 1986, 2nd ed., isbn 0444-87775-4 2. enemark s.: review of education needs, consultancy to the czech office for surveying, mapping and cadastre, eu phare land registration project, project no cz 94402-02, february 1998 3. čepek a., pytel j.: free software – an inspiration for a virtual academy, fig xxii international congress, washington, d.c. usa, april 19-26, 2002 4. čepek a., pytel j.: acyclic visitor pattern in formulation of mathematical model, fig working week, athens,greece, may 22-27, 2004 5. 
kostelecký j., pytel j.: modern geodetic control in the czech republic based on densification of euref network, fig working week, athens, greece, may 22-27, 2004 6. stallman r. m.: free software, free society (selected essays), gnu press, free software foundation, boston, ma usa, isbn 1-882114-98-1 7. stallman r. (interview)1 8. rocken, c., mervart l., lukeš z., johnson j., kanzaki m.: testing a new network rtk software system, in: proceedings of gnss 2004. institute of navigation, fairfax, 2004, 2831-2839 1 http://kerneltrap.org/node/4484 geinformatics fce ctu 2006 10 http://kerneltrap.org/node/4484 geoinformatics study at the ctu in prague 9. archive of all accreditation documents2 submitted to the czech ministry of education, youth and sports. 2 http://geoinformatika.fsv.cvut.cz/akreditace/ geinformatics fce ctu 2006 11 http://geoinformatika.fsv.cvut.cz/akreditace/ development of os tools that support the ccog resolution development of open source tools that support the ccog resolution jim sutherland key words: ccog, national georeferencing initiative, requirements, infrastructure national geo-referencing initiative the canadian council on geomatics (ccog) passed a resolution in march 2005 whereby each province/territory agreed to develop and implement a plan to require geo-referencing new legal surveys according to the principles and standards set out in the resolution. michael o’sullivan, former surveyor general of canada, initiated the resolution which took over five years of research, development, analysis and discussion before being passed. the resolution sets out the responsibility to each province/territory to define three georeferencing zones. the zones are to be defined as a function of parcel location and land use. the resolution also sets out the required minimum relative accuracy standard, at the 95% confidence level, for geo-referencing new legal surveys located within each zone as follows: survey location accuracy 95% confidence level urban areas 5 cm rural areas 20 cm remote areas 1 metre the referencing datum must be nad83 (csrs) and ties, where reasonable, shall be made to high quality officially published high precision networks (hpn), canadian base net (cbn) or active control points (acp). currently in bc there are hpn networks in the capital regional district and greater vancouver regional district areas, approximately a dozen cbn pillars, and approximately eighteen bc-acp gps reference stations. a fundamental principle of the resolution is that requirements (legislation, regulations, rules) for integration of new legal surveys, to the standards, should only be implemented when there is sufficient and easy access to the reference framework. this leaves the provinces and territories to develop and implement a plan to: 1. assess their local geo-referencing infrastructure and enhance where necessary to ensure access is practical. 2. communicate the migration strategy for mandatory geo-referencing to the survey community. 3. solicit input from land surveyors for defining geo-referencing zones. 4. assess land surveyors training requirements and bridge gaps where necessary. 5. explore the opportunity for collaborating on the development of open source tools that will facilitate and support geo-referencing new legal surveys nationally. 6. make necessary legislation, regulation or rule changes. 
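an editorial note on the accuracy standards quoted above: they are stated at the 95% confidence level. as a rough guide only (assuming normally distributed, uncorrelated errors; this conversion is not part of the resolution itself), the usual factors relating a standard deviation to a 95% bound are

\[ r_{95}^{\mathrm{1D}} \approx 1.960\,\sigma, \qquad r_{95}^{\mathrm{2D}} \approx 2.448\,\sigma \quad (\sigma_x = \sigma_y = \sigma), \]

so meeting, for example, the 5 cm horizontal standard for urban areas implies a per-coordinate standard deviation of roughly 5 / 2.448, i.e. about 2 cm.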
geinformatics fce ctu 2006 28 development of os tools that support the ccog resolution bc geo-referencing requirements the current general survey instruction rules require all legal surveys outside of an integrated survey area, conducted all or in part by gps methods to be geo-referenced to two (2) metres or better at the 95% confidence level. all legal surveys within integrated survey areas must be geo-referenced to 5 cm or better at the 95% confidence level. there is also a fairly recent rule change that requires mineral tenure act surveys to be geo-referenced to 0.5 metres or better at 95% confidence level. a working group, chaired by mike taylor, was created about a year and a half ago by the government liaison committee (glc) to investigate technical aspects of geo-referencing legal surveys and the practicality of accessing the positioning infrastructure within bc. the summary report of the working group findings on geo-referencing legal surveys was presented to the members at the last annual general meeting. the report includes an excellent summary of available positioning methods and accuracies achievable. in alignment with the ccog resolution for integration of legal surveys passed in march 2005, the glc has taken the follow-up initiative to develop draft rule changes to require mandatory geo-referencing of all legal surveys in bc. the rule change will be presented to the survey rules committee (src) in the near future. in support of this proposed rule change, the glc is sponsoring a workshop at the annual general meeting which is intended to provide a good overview of positioning technologies and specifically outline the practicality of geo-referencing legal surveys in bc. geo-referencing infrastructure the provinces, territories and natural resources canada (nrcan) have worked together to modernize the geo-referencing framework across canada for many years. this collaborative effort has resulted in the readjustment of regional geodetic control networks and the official adoption of the nad83 (csrs) datum. the federal leadership and regional cooperation to enhance geo-spatial reference system has provided a much more homogenous network of geodetic control across canada. diagram 1 illustrates the hierarchy of the national geodetic control and regional control in british columbia. the philosophies and resources available within the provincial and territorial areas to develop the regional geodetic referencing framework over the last many years resulted in quite a variance in the physical and active control infrastructure developed within each region. as brad hlasny, manager of the geo-spatial reference section, base mapping and geomatics services branch (bmgs) said in the march 2004 issue of the link, “truly, we are lucky to live in b.c. when it comes to accessing a modern and effective geo-spatial reference system (with its associated positioning services and tools)”. it is important to note that bmgs is actively pursuing cooperative partnerships with local, regional governments and educational institutes to further enhance the provincial active control infrastructure. new active control points are in the process of being set up in castlegar and prince george with other areas currently under discussion and consideration. one recent advance in the geo-referencing environment is the online precise point positioning (ppp) service launched by nrcan (federal government). 
this service enables users with geinformatics fce ctu 2006 29 development of os tools that support the ccog resolution figure 1: accuracy one receiver to derive reasonably accurate nad83 (csrs) positions relative to the canadian active control system (cacs). the user collects gps data at a point, exports a file in the rinex file format, and then submits this file to the online ppp service. the on-line process is very fast and the user is emailed results and comprehensive processing reports. the processing can also be performed offline as well by using gps pace software. the improved accuracy of the positioning solution is derived from precise gps orbit and clock information which is collaboratively collected and distributed by agencies participating in the international gps service. single or dual frequency receivers may be used, however, precise point positioning using dual frequency receivers takes advantage of using both the carrier phase and pseudo-range observations. the ppp processing for single frequency receiver’s currently uses only the pseudo-range observations. as well, only dual frequency receivers can take advantage of direct measurement of atmospheric errors. single frequency receivers must instead rely on modelling these errors with a result of reduced positional accuracy. base mapping and geomatics services branch is currently testing the ppp service using data collected with various receivers including low cost single frequency receivers. the tests are expected to include typical legal survey observation scenarios (see diagram 2) that include ppp derived coordinates, 3d vectors (derived from differential positioning using two receivers) and conventional direction/distance observations. i believe the ppp service will have a great impact in making it practical and relatively inexpensive for land surveyors to geo-reference new legal surveys in remote areas. the availability of inexpensive new generation gps receivers that are capable of collecting multi-channel carrier phase and pseudo-range data and able to convert to a rinex file format will complement the ppp service to provide a powerful and practical positioning solution. new processing techniques and technologies also show promise for providing better positioning solutions using inexpensive gps receivers, thereby, making high accuracy positioning more practical/accessible to the survey community. geinformatics fce ctu 2006 30 development of os tools that support the ccog resolution figure 2: legal survey network integrating legal surveys the challenge facing the legal survey community is to determine whether the geodetic framework is adequate in each region to implement rules for mandatory geo-referencing of all new legal surveys, and if not, how to enhance it to be practical. the cost of ramping-up all legal survey firms to be capable of geo-referencing all new legal surveys could be considerable. one must look at not only the capital cost of survey equipment, but also the equipment validation, training, maintenance and processing resources. alternatively, many survey firms have already taken advantage of modernizing survey operations to include global positioning systems in order to gain efficiencies and to embark on new business opportunities. while the implementation of geo-referencing of all surveys will involve additional work not previously undertaken during legal surveys in many areas of the province, the long term benefits to the cadastre and to surveyors can not be understated. 
development of open source tools that support the ccog resolution the resolution passed by ccog fosters the integration of legal surveys across canada. however, this places an additional burden on land surveyors to find practical methods for georeferencing and integrating their conventional legal surveys to meet the standards. it would geinformatics fce ctu 2006 31 development of os tools that support the ccog resolution be very beneficial to all stakeholders to collectively explore the development of an integrated set of open source tools for processing conventional survey and coordinate observations. this would provide an effective alternative to commercial software that would benefit the national survey community. we are very fortunate to have a number of existing open source tools that provide valuable resources independently, however, when combined and enhanced would provide a very powerful and practical processing environment for land surveyors. a brief outline of some of pertinent existing open source programs follows: gama least square adjustment is a program for performing a least square adjustment of 1d, 2d and 3d survey observations. the project was initiated by ales cepek and jan pytel at the department of mapping and cartography, faculty of civil engineering, czech technical university in prague in 1998. the program name gama is derived as an acronym from geodesy and mapping. gama has been presented at fig conferences and received status of gnu license software in 2001. gama adjusts observed coordinates (with variancecovariance matrix), distances, angles/directions, height differences and 3d vectors in a local coordinate system. the observation data is formatted as an xml (extensible markup language) input file. this makes it easy to read and edit the data. gama is run simply as a command line program or via the companion gui rocinante (written by jan pytel) which is very well structured and easy to use. rocinante graphical user interface for gama is fully object-oriented cross-platform gui for creating/editing gama xml input files and running gama least square program. use-friendly desktop gis (udig) is an open-source spatial data viewer/editor based on opengis standards and is licensed under lesser gnu public license (lgpl) and includes a web map server and web feature server. udig provides a common java platform for building spatial applications with open source components. geotools is an open source java gis toolkit for developing programs compliant with open geospatial consortium standards. geotools provides a computational process of converting a position given in one coordinate reference system into the corresponding position in another coordinate reference system. gama and rocinante are very powerful survey data processing resources for land surveyors. they were originally intended for adjusting geodetic networks, however, rocinante makes it easy to input legal survey type observations and quickly run the gama least square adjustment. land surveyors commonly use commercial coordinate geometry programs to process their conventional direction and distance survey observations. in recent years some firms have incorporated gps base line processors and complimentary least square adjustment programs that may or may not include processing both conventional and gps data together. many commercial tools enable land surveyors to export an autocad tm dxf format file. 
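To give an idea of the format, a minimal GAMA input file might look as follows. This is a hedged sketch only: the element names follow the gama-local XML input described in the GNU Gama documentation, but the point identifiers, coordinates and observed values are invented for illustration (distances in metres), and a real network would carry many more observations.

<?xml version="1.0" ?>
<gama-local>
  <network>
    <description>Minimal example: two fixed points, one adjusted point</description>
    <parameters sigma-apr="10" conf-pr="0.95"/>
    <points-observations distance-stdev="5.0">
      <point id="1" x="1000.000" y="5000.000" fix="xy"/>
      <point id="2" x="1100.000" y="5000.000" fix="xy"/>
      <point id="3" adj="xy"/>
      <obs from="1">
        <distance to="2" val="100.000"/>
        <distance to="3" val="70.711"/>
      </obs>
      <obs from="2">
        <distance to="3" val="70.711"/>
      </obs>
    </points-observations>
  </network>
</gama-local>

Writing such files by hand is of course not the point of the proposal; as noted above, most commercial packages can already export an AutoCAD DXF file.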
this provides an opportunity to use this file format as an easy means of importing the direction and distance observations to the gama xml input format. udig provides an excellent framework for creating a plug-in extension to import the dxf file, convert to gama xml input format and run the adjustment. udig can also perform necessary coordinate transforms using the appropriate geotools code. adding a plug-in extension to import coordinate obsergeinformatics fce ctu 2006 32 development of os tools that support the ccog resolution vations (including the associated variance co-variance matrix) from precise point positioning files would create a powerful and practical integrated legal survey observations processing environment. diagram 3 illustrates a basic proposed legal survey data processing model integrating the aforementioned open source tools. the model enables a user to: 1. create observation input file using rocinante; 2. input conventional observations using a vector file input (dxf / shp) and extract to gama (note: the vector file would be coded by line-type or colour for discerning direction, distance or both type of observations); 3. input coordinate observations via ppp, text file or dialogue input; 4. input land xml file format observations and convert to gama xml input format; 5. perform coordinate transforms; 6. run gama least square adjustment, review output, edit and re-run as necessary; 7. export final adjusted network in dxf, shp, land xml format or gama output file. creating a robust and easy to use set of integrated legal survey open source processing tools will provide substantial support to land surveyors in all regions tasked with the requirement of geo-referencing legal surveys according to the new ccog resolution. we are very fortunate to have all the main open source components already existing to create an integrated processing environment that will require minimal additional effort. the march 2005 ccog resolution set the stage for the migration towards mandatory georeferencing of legal surveys across canada. i strongly encourage the national survey community to seek support from ccog and the geo-connections program in order to collectively explore, develop and implement a powerful integrated open source processing solution for integration of legal surveys. references 1. gama least square program1 2. rocinante gui for gama2 3. udig home page3 4. geotools home page4 5. geoconnections5 6. canadian council on geomatics (ccog)6 1 http://www.gnu.org/software/gama/ 2 http://roci.sourceforge.net/ 3 http://udig.refractions.net/confluence/display/udig/home 4 http://www.geotools.org/ 5 http://www.geoconnections.org/cgdi.cfm/fuseaction/partners.ccog/gcs.cfm 6 http://www.cgdi.gc.ca/ccog/ccog.html geinformatics fce ctu 2006 33 http://www.gnu.org/software/gama/ http://roci.sourceforge.net/ http://udig.refractions.net/confluence/display/udig/home http://www.geotools.org/ http://www.geoconnections.org/cgdi.cfm/fuseaction/partners.ccog/gcs.cfm http://www.cgdi.gc.ca/ccog/ccog.html development of os tools that support the ccog resolution figure 3: udig framework 7. precise point positioning service7 8. canadian spatial reference system8 9. 
British Columbia geo-spatial reference system9

7 http://www.geod.nrcan.gc.ca/ppp_e.php
8 http://www.geod.nrcan.gc.ca/network_p/index_e.php
9 http://srmwww.gov.bc.ca/bmgs/gsr/

Automatic import of surveying data to vector layers in GIS

Open-source tool for automatic import of coded surveying data to multiple vector layers in a GIS environment

Eva Stopková
Institute of Oriental Studies, Slovak Academy of Sciences
Klemensova 19, 813 64 Bratislava, Slovak Republic
eva.stopkova@gmail.com
ORCID: 0000-0003-2928-6914
Geoinformatics FCE CTU 15(2), 2016, doi:10.14311/gi.15.2.2

Abstract

This paper deals with a tool that enables import of coded data from a single text file into more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping the conventions for coding point names) are discussed as well. Possibilities for future development are suggested (e.g. generalization of point-name coding or more complex attribute table creation).

Keywords: surveying; automatic import of coded data; GIS.

Introduction

Standard functionality of current geographic information system (GIS) software also includes a tool for importing a vector layer from a text file. However, one text file is required for each vector output, as concurrent import of multiple layers does not seem to be supported yet. Output files from data collectors that allow field codes to be set during data acquisition (e.g. a total station1) usually contain data that should be distributed to several vector layers. To import a text file containing this type of data, it is necessary to perform the task for each layer separately. This can be quite time-consuming and, above all, the advantage of automatic import of coded data is lost. Alternatively, the data can first be imported into CAD automatically and the CAD drawing then converted to vector layers in the GIS environment. The purpose of this paper is to provide an overview of a script that has been developed to create multiple vector layers in a GIS environment directly from the acquired data (and eventually to convert them to CAD).

1 A total station is an electronic device for spatial data acquisition. It measures distances and horizontal and vertical angles, and returns coordinates of the objects in three-dimensional space.
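To make the tedium concrete, this is roughly what the conventional route looks like in GRASS GIS when every target layer has to be imported separately: the coded file is first split by layer and v.in.ascii is then run once per layer. A hedged sketch follows; the file names, code prefixes and column layout are hypothetical, and only v.in.ascii itself is an actual GRASS module.

# Hypothetical input survey.csv with lines such as:  wall.line.01,118.95,114.80,5.47
grep "^wall\." survey.csv > wall.csv

v.in.ascii input=wall.csv output=wall separator=comma x=2 y=3 z=4 -z \
    columns="name varchar(40), x double precision, y double precision, z double precision"

# ...and the same again for trench.csv and for every further layer, plus additional
# steps to connect the imported points into lines or polygons.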
to automatically draw a line from imported data, there are available linework code sets [3] and the extension civil express tools [7]. microstation [4] enables automatic import of points according to their codes using extensions mgeo [13] or ings_geo [19]. the last one is probably known primarily in the region of central europe, as well as another software, kokeš [10], that provides similar functionality. as there has not been found any similar tool for direct import of coded data in gis environment, a script v.in.survey has been designed and developed as an add-on for open-source software grass gis [14]. python script v.in.survey the main purpose of the newly developed tool was to import coded data (e.g. output file from total station) into vector layers as automatically as possible. to deal with this task: • the data should be distributed into vector layers without any manual editing. if the data codes reflect desired layer structure (names and geometry types), automatic setup of vector layers may spare plenty of time, especially if the data should be imported to many different layers. • line or polygon objects should be drawn automatically just connecting the points according to code that indicate particular object and following the order of the points. • separated line or polygon objects that belong to one layer should be merged at last. all these tasks are supported in grass gis [14] by existing modules. script development dealt mainly with enabling communication between them (based on a quite simple system of point naming), but some functionality was modified as well to match specific needs of the data import. input data format points’ names play a substantial role in data import using v.in.survey. they provide basic information for automatic layer creation, for connecting points with lines and for polygon creation, for updating attribute table and for merging desired layers if necessary. thus each point name should consist of three parts that are separated by dots: layer_name.vector_code.point_id layer name should be short but descriptive. if the layers should be merged, then the point name should contain base string with a suffix that indicates separated objects in the layer. geoinformatics fce ctu 15(2), 2016 16 e. stopková: automatic import of surveying data to vector layers in gis vector code gives information about object type. currently, points, lines and polygons are supported. vector code string may reflect geometry (i.e. point, line or various abbreviations), but it can be descriptive as well (e.g. tree, well etc.). the script translates vector code from the rules given by the user. point id is recommended to be an integer to enable automatic number increase during the data acquisition and to keep the order of the points to draw the features properly. process description script performance is shown in fig. 1. the diagram was created using the python profiler cprofile [24] and gprof2dot [9] according to [15]. figure 1: performance profile of the script. the call function represents percentage of time n% spent in each function and its children [20], percentage of time (n%) spent in each function itself and the call count nx. pre-processing by the script includes sorting the data according to the point names (to keep points from separated layers together), data separation for vector layer import and conversion to standard format2. 2in grass gis, there are two modes for ascii data import: point format for simple list of coordinates and standard for grass native vector format. 
for more detailed information, see [17]. geoinformatics fce ctu 15(2), 2016 17 e. stopková: automatic import of surveying data to vector layers in gis the functionality of make_new consists of creating vector layers using v.in.ascii [17]. if necessary, conversion to the polygon layer is performed using v.centroids [6]. attribute table for line and polygon layers is created using v.db.addtable [22] (categories of line layer objects are added at first using v.category [5]). then, column that stores layer name is updated using v.db.update [21]. point layers are imported in the point format together with the attributes. section layers supports merging the features by v.patch [11] using various patterns to detect desired layers. for more details, see the case studies in the following section. if any layers have been merged, v.clean [12] is performed to clean the topology. more information on the script processes is provided in the script manual available among grass gis add-ons [16]. conversion to cad although grass gis [14] and qgis [25] provide tools for export to cad, the script optionally supplies this conversion too due to the needs of the archaeological documentation. this requires especially 3d polylines support and exporting attributes to the text layers as well. necessary functionality is based on modified parts of existing modules. compared to the original functions, the conversion does not use imported vector layers, but the same text files with separated data that have been created in pre-processing for import into vector layers in gis. the other differences between the output dxf files are summarized in tab. 1. table 1: the differences between dxf export by v.out.ogr [8] and v.in.survey module v.out.dxf [8] v.in.survey dxf content separated layer all the layers 3d polyline no yes layer content point, line, layer (vector object) boundary, centroid, layer_label (label of the object) point_label, layer_elev_pts (point elevations) centroid_label layer_label_pts (point labels) layer label layer number layer name in fact, the tool enabling export of the whole project into the dxf file instead of separated ones is available in qgis [25]. but we needed to export 3d features and the attributes as well and our another intention was to create complete archaeological map without any additional steps. case study: tell el-retaba the script has been tested on spatial data from archaeological missions at tell el-retaba (egypt). the data represents structures and small findings excavated during the seasons of 2014 and 2015. the archaeological site of tell el-retaba is situated in the eastern nile delta, in the valley called wadi tumilat. the research is focused on the pharaonic fortresses of the new kingdom, but excavations prove presence of hyksos settlement in 18th 16th century bc and the likely presence of older fortress seems to be indicated here. more information on the polish-slovak mission can be found in [18] and in other scientific papers that have been referred there. geoinformatics fce ctu 15(2), 2016 18 e. stopková: automatic import of surveying data to vector layers in gis coding system at the archaeological excavation archaeological documentation includes also maps of the excavations. this requires spatial determination of each structure and each finding (e.g. pottery, piece of metal, bone etc.). each object is given an unique name (a number of stratigraphical unit or a number of finding) and it can be described by one of items recorded on quite a short list of finding types. 
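How the general three-part naming convention described earlier (layer_name.vector_code.point_id) maps onto layers and geometry types can be illustrated with a small sketch in Python, the language of the add-on itself. This is not the script's actual implementation, and the geometry codes in the rules dictionary are examples only:

# Split names such as "su1716_1.brd.01" into layer name, geometry code and point id.
# The mapping of codes to geometry types is user-defined (pt_rules/ln_rules/poly_rules);
# the values below are purely illustrative.
RULES = {"brd": "polygon", "brick": "polygon", "ln": "line", "elev": "point", "pp": "point"}

def parse_point_name(name):
    layer, code, point_id = name.split(".")      # layer_name.vector_code.point_id
    geometry = RULES.get(code, "point")          # fall back to a plain point
    return layer, geometry, int(point_id)        # numeric id keeps the drawing order

print(parse_point_name("su1716_1.brd.01"))       # ('su1716_1', 'polygon', 1)

Keeping the point id numeric is what lets the script preserve the drawing order when the points are later connected into lines and polygons.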
during the season of 2014, a system of codes was developed for the purposes of effective surveying. table 2: coding system at tell el-retaba layer name specification id description su0001 brd 01 a boundary of an archaeological layer su0001 elev 01 points within an archaeological layer su0001 pp 01 photogrammetric points for an archaeological layer sf 0001 small finds miscellaneous fixed points, stake-out etc. all the mentioned above means large datasets daily, that should be distributed to separate layers in cad or gis. that is the reason why we consider this data as an interesting sample for testing of the script. input data surveying has been performed using the total station trimble m3. positions of the measured objects were determined in three-dimensional local coordinate system retaba2014 [26], see the sample from the season of 2015 (fig. 2, trench supervisors: l. hulková and j . marko): # point_name,easting,northing,elevation ... su1716_1.brd.01,119.053,114.865,5.472 su1716_1.brd.02,119.137,114.869,5.469 su1716_1.brd.03,119.193,114.876,5.458 ... all the daily measurements were merged to one file. this dataset had to be modified in common text editor, as there were some points that did not respect sorting rules of the script. there might happen also another issues that require manual editing of the point names (see sect. 4.4). if the data has been named correctly during the acquisition, just typos correction and small edits can be expected here. test of efficiency the script was tested on the laptop, parameters of which are summarized in tab. 3. table 3: parameters of the computer that was used for the script testing operating system ubuntu 14.04 lts, 64 bit processor intel core i5, 2.60 ghz × 4 ram 8 gib geoinformatics fce ctu 15(2), 2016 19 e. stopková: automatic import of surveying data to vector layers in gis following lines demonstrate the script commands that defined rules for distribution of the archaeological data into gis project and cad drawing. the first line contains general settings for import. then, geometry types are coded and prefixes that indicate layers to be merged are summarized. at last, properties of output dxf file (name and text size) are included. v.in.survey input=retaba2014.csv outdir=retaba2014 separator=tab \ easting=2 northing=3 elevation=4 \ pt_rules=pt,pp,elev ln_rules=profile,ln poly_rules=brd,brick \ merge_lyrs=su1309_brd,su1311_brick,su1315_brd,su1331_brick_2,\ su1331_brick_3,su1357_brick,su1390_ln,su1391_ln,flatm,trench \ dxf_file=retaba2014.dxf text=0.05 -z -x v.in.survey input=retaba2015.csv outdir=retaba2015 separator=comma \ easting=2 northing=3 elevation=4 \ pt_rules=pt,r14,pp,ebr,geo,pot,ch,elev ln_rules=line,nail,ctrl,bot \ poly_rules=brd,brick \ merge_lyrs=su893_bot,trench,su1678_brd,su1681_brd,su1682_2l, \ su1691_brd,su1691_brick,su1695_1_brd,su1718_brd,su1723_brd, \ su1728_brick,su1750l_1,su1751_brick,su1759l \ dxf=retaba2015.dxf textsize=0.05 -z -x the criteria for automatic data import might be quite diverse, especially for complex datasets. criteria definition requires good knowledge of the data purpose and some planning before the acquisition, but it might be worth considering the difference between the time taken by script performance (see tab. 4) and the time taken by importing the data in other ways outlined above. to demonstrate how the dataset structure may influence processing time, the tutorial dataset [23] from north carolina was included into this test as well. 
this data differs from the archaeological data significantly: while the excavation data consists of plenty of separate layers that can be merged occasionally, the line vector layer in the north carolina dataset has been merged from many components. how this has affected processing time, may be seen in tab. 4. more detailed information on the case study is available in the script manual. table 4: performance time on various datasets dataset number of: processing timepoints point layers line layers polygon layers no dxf dxf nc clipped 71157 1 → 1 355 → 1 25 → 1 11m15s 17m03s retaba 2014 3587 155 → 155 26 → 20 160 → 150 4m31s 7m07s retaba 2015 3494 111 → 111 20 → 17 106 → 88 3m35s 4m41s processing time rapidly increases with number of objects merged to one vector layer. this step includes time-consuming process of removing temporary layers that store the objects before patching. the dataset from the season of 2014 contains only 100 points more than the dataset from 2015, but the processing takes much more time. in 2014, there were 160 measured polygon objects that should be stored in 150 vector layers (i.e. 10 layers should be merged), while 2015, there were only 18 layers to be patched. merging the layers increases processing time significantly for dxf conversion as well. geoinformatics fce ctu 15(2), 2016 20 e. stopková: automatic import of surveying data to vector layers in gis validation of the results imported vector layers were compared with existing cad drawings. just issues depending on the input data needed to be emerged (tab. 5): broken geometry and wrong layer name. fig. 2 – 4 illustrate the most frequented issues. table 5: summary of the issues in automatic multiple layer import issue number of layers solution (modify input data)2014 2015 messy shape 19 7 reorder input points wrong connection 12 23 distinguish objects to be separated wrong vector type 4 7 modify vector code in point name small artificial shape 13 6 remove duplicate starting vertex artificial closing of the area 1 edit manually or create line layer missing layer 5 6 split artificially merged objects correct layers 116 56 no need to modify total sum 170 105 an archaeological layer in fig. 2 consists of three vertically stratified parts that have not been distinguished in the original data file. thus, the object has been drawn just following order of the points and the shape (red hatched area) looks unrealistic. to get correct polygons (separate blue areas), different suffixes have been added to the point names in the input data. figure 2: layer to be split. trench supervisors: l. hulková and j. marko, 2015. the structure in fig. 3 was measured very effectively, without a need to go around it twice. there is nothing wrong about the way how it has been measured, but automatic drawing creation requires measurement according to topological rules to keep the order of the points. figure 3: layer to be reordered. trench supervisor: k. smoláriková, 2015. geoinformatics fce ctu 15(2), 2016 21 e. stopková: automatic import of surveying data to vector layers in gis fig. 4 represents a stratigraphical unit with a small dangle in the corner. this can be avoided simply by not observing starting (or ending) point twice. polygons are closed automatically. figure 4: layer to be cleaned. trench supervisor: v. dubcová, 2014. imported layers have been compared with the surveying journal to identify any missing layers. a few cases happened (see tab. 
5), but all of them have been caused by inappropriate merging when the layers that should stay separated have been merged (as in fig. 2). new dxf drawing has been overlaid with existing documentation as well. fig. 5 shows an archaeological layer labelled with point names, elevations and with the number of stratigraphic unit located on the layer centroid. the layer is visualized using exported labels (purple text) and the object (grey line) from original documentation. figure 5: export to dxf compared with original documentation. trench supervisor: k. smoláriková, 2015. north arrow, scale bar and information about vertical datum were added later (these items have not been exported by the script). height of the text is general for the whole drawing and sometimes does not fit. then it must be modified later manually as well. future work for successful script performance, it is substantial to name the points in respect of quite simple, but also rigid system of rules that have been designed for this task. although we believe that this system may cover needs of various human activities that require surveying measurements, for other datasets might be more effective any different system of point naming. the same can be said of exporting the data into cad according to a template that requires creation of particular text layers connected to particular geometry entities. geoinformatics fce ctu 15(2), 2016 22 e. stopková: automatic import of surveying data to vector layers in gis thus, in the future it might be useful to modify the script to distribute objects into the layers according to any rule given by the user. and maybe it would be useful to unify all the possibilities of dxf export that are provided in grass gis [14] to an unique tool that would respect a variable template (i.e. content, attribute export etc.) given by the user according to the specific need of the data or of the task. other possibilities for further development consist of adding different input types (e.g. text attributes as well) and adding user-defined possibilities of automatic update of geometry properties in the attribute table. conclusion the module supports geometry and text import into multiple gis or cad layers without any manual setup of their structure and properties. this may spare user’s time extremely – in the case studies mentioned above, concurrent creation of cad drawing and gis project took minutes. even using such useful features as description keys in autocad [1], cad drawing of the same datasets took days depending on the data complexity. however, point names should reflect desired structure of the layers and the data should be acquired according to topological rules (simply said, as if they were being “drawn” on the ground). otherwise revision of the input data can take longer time than casual cad drawing creation or import into gis layers, especially in smaller datasets. acknowledgements i thank to my colleagues from the archaeological mission, especially the trench supervisors, who contributed to acquisition of the test data with their archaeological knowledge. special thanks goes to two anonymous reviewers whose constructive suggestions helped to improve the manuscript and enclosed script. archaeological mission at tell el-retaba has been supported by the slovak research and development agency (project no. apvv-0579-12). references [1] autodesk, inc. autocad. url: http://www.autodesk.com/products/autocad/ overview (visited on 14/12/2015). [2] autodesk, inc. description keys. 
URL: http://docs.autodesk.com/civ3d/2013/enu/index.html?url=filescug/guid-db075e8c-5e3d-4a0c-b379-90404ad181f2.htm,topicnumber=cugd30e107714 (visited on 14/12/2015).
[3] Autodesk, Inc. Linework code sets. URL: http://docs.autodesk.com/civ3d/2013/enu/index.html?url=filescug/guid-8810116c-bb09-46fc-b058-1b975e1b9c18.htm,topicnumber=cugd30e52665 (visited on 14/12/2015).
[4] Bentley Systems. MicroStation. URL: https://www.bentley.com/en/perspectives-and-viewpoints/topics/viewpoint/connect-edition-perspective (visited on 14/12/2015).
[5] Radim Blazek and Martin Landa. v.category. URL: https://grass.osgeo.org/grass70/manuals/v.category.html (visited on 11/12/2015).
[6] Hamish M. Bowman and Trevor Wiens. v.centroids. URL: https://grass.osgeo.org/grass70/manuals/v.centroids.html (visited on 11/12/2015).
[7] Mike Caruso. Drawing lines automatically by point name range. URL: https://forums.autodesk.com/t5/autocad-civil-3d-general/drawing-lines-automatically-by-point-name-range/td-p/5551586 (visited on 14/12/2015).
[8] Charles Ehlschlaeger and Radim Blazek. v.out.dxf. URL: https://grass.osgeo.org/grass70/manuals/v.out.dxf.html (visited on 11/12/2015).
[9] José Fonseca. gprof2dot. URL: https://github.com/jrfonseca/gprof2dot (visited on 06/06/2016).
[10] GEPRO s.r.o. KOKEŠ. URL: http://www.gepro.cz/produkty/kokes/ (visited on 14/12/2015).
[11] Dave Gerdes and Radim Blazek. v.patch. URL: https://grass.osgeo.org/grass70/manuals/v.patch.html (visited on 11/12/2015).
[12] David Gerdes, Radim Blazek, and Martin Landa. v.clean. URL: https://grass.osgeo.org/grass70/manuals/v.clean.html (visited on 11/12/2015).
[13] GISoft. MGEO. URL: http://www.gisoft.cz/mgeo/mgeo (visited on 14/12/2015).
[14] GRASS Development Team. Geographic Resources Analysis Support System (GRASS) Software, version 7.1. Open Source Geospatial Foundation. URL: https://grass.osgeo.org/ (visited on 14/12/2015).
[15] GRASS Development Team. Tools for Python programming.
URL: https://grasswiki.osgeo.org/wiki/tools_for_python_programming#cprofile_profiling_tool (visited on 06/06/2016).
[16] GRASS Development Team. Vector add-ons. URL: https://svn.osgeo.org/grass/grass-addons/grass7/vector/ (visited on 14/12/2015).
[17] Michael Higgins, James Westervelt, and Radim Blazek. v.in.ascii. URL: https://grass.osgeo.org/grass70/manuals/v.in.ascii.html (visited on 11/12/2015).
[18] Jozef Hudec, Emil Fulajtár, and Eva Stopková. "Historical and environmental determinations of the ancient Egyptian fortresses in Tell el-Retaba". In: Asian and African Studies 24.2 (Dec. 2015), pp. 247-283. URL: https://www.sav.sk/journals/uploads/120813537_hudec.pdf.
[19] INGS. INGS_GEO. URL: http://www.ings.sk/?ide=93899 (visited on 14/12/2015).
[20] José Fonseca et al. Interpreting gprof's output. URL: http://sourceware.org/binutils/docs-2.18/gprof/call-graph.html#call-graph (visited on 26/07/2016).
[21] Moritz Lennert. v.db.update. URL: https://grass.osgeo.org/grass71/manuals/v.db.update.html (visited on 11/12/2015).
[22] Markus Neteler. v.db.addtable. URL: https://grass.osgeo.org/grass70/manuals/v.db.addtable.html (visited on 11/12/2015).
[23] Markus Neteler and Helena Mitasova. North Carolina data set. URL: https://grass.osgeo.org/download/sample-data/ (visited on 24/11/2015).
[24] Python Software Foundation. The Python profilers.
URL: https://docs.python.org/2/library/profile.html (visited on 06/06/2016).
[25] Quantum GIS Development Team. QGIS Geographic Information System. Open Source Geospatial Foundation. URL: http://www.qgis.org/en/site/ (visited on 05/01/2016).
[26] Eva Stopková. Survey control network RETABA2014. Tech. rep. Revision 2. 2016.

Scientia est potentia

Welcome words

Karel Večeře
President of the Czech Office for Surveying, Mapping and Cadastre
karel.vecere cuzk.cz

Ladies and gentlemen, distinguished guests, it is a great honour for me to cordially greet you here in Prague on the occasion of the Knowledge Is Power conference. The symposium has been organised by the FIG Commission for Professional Education and the Faculty of Civil Engineering of the Czech Technical University in Prague, and is being held to mark the 300th anniversary of the university's foundation. I am really pleased that this symposium is taking place right here in Prague, and I hope that it will be a source of inspiration for the development of education in land surveying and geomatics.

Allow me now to say a couple of words from the position of the man who is responsible not only for the nation-wide land surveying activities and products (maps etc.) but also for the administration of the cadastre of real estates. Land surveying has changed dramatically over the last 20 years. We had barely got used to electronic surveying devices, which replaced the previous optical equipment, when satellite positioning methods arrived. Built-in computers in measuring devices process the surveying results, and therefore written records of surveyed data are hardly used at all. Computer processing and presentation of the results of land surveying activities have undergone radical changes. Only people with an excellent basic knowledge acquired in the educational process can respond to these changes and be actively engaged in them. Previous technical development has brought many incentives into the educational system, the content of surveyors' studies has been changing, and additional education for land surveyors has been growing. New study specializations such as geomatics, which reflect this development, have appeared.

We can also recognize other aspects of this progress. We do not need so much time for surveying itself, and the processing of results is also much faster. Could this be a cause for a decline in the number of surveyors? I personally do not think so. The point is, however, that surveyors should evolve into further areas, and the range is really wide. We are traditionally very closely connected to informatics, and universities have also had to revise their educational programmes as a result. I dare say that GIS can be studied at every university with a land surveying specialization. However, GIS is not the only field for surveyors and for their potential value. The real estate market has been growing rapidly in the Czech Republic; it represents more than 10% annual growth. It is a huge sphere for our profession.
due to our history the orientation in cadastral documentation is still very complicated and surveyors are able to assist in this, it means not only in executions of surveying sketches for partioning of parcels or surveying of geinformatics fce ctu 2007 19 scientia est potentia welcome words new buildings. despite this fact the combination of a land surveying company with a real estate agency is still rather exceptional. it might be an area for some changes in the content of educational programmes, which should enable to acquire wider knowledge in the field of evaluation of real estates or even in economy with focus on the real estate market. i think that our universities are able to prepare highly skilled graduates even in the field of land consolidation, and possibly in further fields important for cooperation on projects focused on better use of both urban and rural areas. that is the reason why surveyors often take part in teams preparing and launching such projects. i could go on listing fields suitable for surveyors for many hours, the point is, however, whether they will get a sufficient base for these specializations during their study and whether we will be able to motivate them enough for these interdisciplinary fields. by the end of 2006 a total of 720 graduates from technical universities focused on surveying had worked in our offices but beside them also 432 graduates from faculties of law and 343 graduates from other universities, particularly it specialists and economists. in addition to that we employ more than 3 000 alumni of secondary schools, half of them being surveyors. the great number of lawyers is a result of the fact, that we are responsible for the registration of rights to real estates. we execute legal examination of deeds about real estates and on this base we decide about the registration of the right in the cadastre of real estates or its refusal. the experts in informatics support not only everyday operation of it but also further development of cadastral information system and other gi systems. as i have already mentioned, the cadastre of real estates in the czech republic includes and connects at present 3 main professions: 1. surveyors, particularly responsible for the mapping part of the cadastre, but also supporting other technical activities, 2. lawyers, supporting the registration of rights and supervising other administrative activities, 3. experts in informatics, taking care of the complex databases of the territory and supporting a wide range of services for customers and employees. the graduates of different universities have to cooperate really closely. without such cooperation we could not achieve satisfactory results. general overview of each other’s work and a certain overlap into the specialization of their colleagues gained already at university are very useful for cooperation of experts from different specializations. it seems to me, that there is no problem to provide enough education in informatics for surveyors. universities produce graduates in surveying field who are more than just skilled users of it and who are competent even in areas requiring deeper knowledge of informatics. it is interesting that nowadays even graduates of faculties of law have not serious problems with informatics and are able to make use of information technologies on a high level. more complicated situations occur in the cases that require certain knowledge of law. 
the surveyor working in the cadastre also needs knowledge of civil code, land law, administrative procedures law, urban planning law and others. it is rare to find a graduate of both branches – surveying and law, but from my point of view it isn’t so important. the important thing is to improve the legal awareness of surveyors during their education – at least in land, building and administrative laws. and that is not only the problem of surveyors in the state geinformatics fce ctu 2007 20 scientia est potentia welcome words administration. the surveyor should be able to act as a complete advisor at least regarding the technique of real estates business. to do so, they have to have knowledge of further processes provided usually by experts from other branches such as urban planning, building proceeding, water proceeding or they need skills of some important aspects of the nature and landscape protection. unfortunately, surveyors often acquire this necessary knowledge from their working experience, and often in a very complicated way. it is necessary to mention, that the graduates of faculties of law face the same problem of insufficient awareness in some technical fields. it is difficult in many cases to provide them with sufficient basic technical knowledge to simplify their orientation in surveying sketches. they do not get such skills at the faculty. nevertheless, this knowledge is the essential for correct registration of the proprietary rights, of mortgages or easements, apart from correct making a deed about the real estate. that is why we organize further education in this basic field of cadastre for our lawyers. ladies and gentlemen, i have tried to address the problems faced by state administration responsible for cadastre of real estates and surveying in connection with university education. we realize more than 1,5 mil. registrations of changes into the cadastre, introduce 130 thousand survey sketches into maps and update other geographical documentation of the cz territory each year. this is, however, significant part of land surveying in this country, but it does not cover everything. there exist both great land-surveying companies with a wide offer of services and products acting on the whole state territory and small land surveying companies specialized – for instance – on survey sketches and acting only on a limited territory. there are construction companies with their special needs for high accuracy while laying out and monitoring buildings. there are utility companies with their specific needs for documentation in technical maps, today on the base of gis, for professionals as well as for common users. there is public administration demanding updated and precise information about the territory for its decisions. all these diverse ideas and professional needs should be satisfied. universities ought to strive to offer skilled graduates equipped with the necessary knowledge and skills and above all with the ability to proceed further in their personal development. this task is definitely not simple, because the educational process cannot omit other even more important aspects and universities have to develop also the language skills or economic awareness of students and last, but not least, to address the general personal development including the ability to present the results of their work. well, i hope this conference will inspire many new ideas which will enrich the education of graduates of land surveying and geomatics and even influence lifelong learning. 
We all – employers and employees as well as our customers, business partners and universities – will benefit from such a development in the future. I hope that you find the Prague atmosphere hospitable and inspiring. At the Czech Technical University in Prague such discussions have been held for 300 years, and without such debates no university would be able to fulfil its mission. I wish your conference success once again. Thank you for your attention.

Karel Večeře, Scientia est potentia1 – FIG Commission 2 Symposium, Prague, Czech Republic, 7-9 June 2007

1 http://geoinformatics.fsv.cvut.cz/wiki/index.php/scientia_est_potentia

Svět se mění nenápadně (The world is changing quietly)

For the title of the opening words to the second Geoinformatics workshop I have borrowed the name of the book of the same title by Jaroslav Žák. At our first meeting in June 2006 we reported on the opening of our new branch of study, Geoinformatics, within the study programme Geodesy and Cartography, and on our plans and expectations. If today, in the introduction to the workshop, I again speak about the past year mainly from our point of view, the point of view of the Geodesy and Cartography study programme, it is because meetings like this one are a valuable source of inspiration and lessons for us, and perhaps our experience may in turn be an inspiration for our colleagues.

In one year not much has changed, and where it has, the changes were inconspicuous. First, we managed to more or less unify the first years of the branches Geodesy and Cartography and Geoinformatics. For the new branch this meant above all adding two traditional subjects, Applied Optics and Electronic Methods; conversely, an introductory course on Database Systems was newly placed into the second semester of the Geodesy and Cartography branch. Because the capacity of the study plans is limited, one of the social-science subjects of the first year had to be reduced. Given the competing offers of various schools and study branches it is, in my opinion, essential that related or identical branches at different schools be clearly profiled; diversity is more valuable than uniformity. The basis of the Geoinformatics branch at the Faculty of Civil Engineering of CTU is understandably an emphasis on geodesy, and the inclusion of Applied Optics and Electronic Methods is fully in line with our technical orientation.

Interest in the new Geoinformatics branch has not increased substantially in its second year; we will again open only two study groups in the first year. In the coming academic year 2007-2008, however, we are at the same time opening the first year of the master's study of Geoinformatics, intended above all for bachelors of our traditional Geodesy and Cartography branch. Sixteen students have enrolled in the first year, which is roughly one quarter of all our graduates. In connection with the transition to structured study, a certain complication arises for our study programme in admitting bachelors from other schools: the bachelor's study in the Geodesy and Cartography programme of the Faculty of Civil Engineering of CTU is accredited as a four-year programme. The question therefore is how to handle the admission of three-year bachelors into our follow-up master's programmes. If applicants do not demonstrate at the entrance examination knowledge in the extent we require of our own bachelors, the admission board may offer them admission to the bachelor's study for one year and, after successful completion of an individual study plan, admission to the master's branch.
If our four-year bachelor's branches represent a certain complication for student mobility, they on the other hand provide significantly more room for the professional training of bachelors than three-year branches do. In particular, our bachelors obtain a completed education in the cadastre of real estates; in the master's branches the cadastre is no longer taught.

I consider it one of the absurdities of our time that, under the Surveying Act (Act No. 200/1994 Coll., as amended), our bachelors nevertheless cannot apply for the official authorization to verify the results of surveying activities for the administration and maintenance of the cadastre of real estates. Nobody questions their professional competence, yet the act explicitly requires an education of at least a master's study programme, five years of professional practice and the passing of an examination of professional competence. Evidently our society is not yet fully prepared for bachelors. I believe this is a transitional state that will not last long. I mention this case here because I am sure it is not the only example of university bachelor's education being degraded by law, by departmental regulations, or simply by practice and the requirements of professional chambers. The European idea of labour mobility and of general recognition of education thus runs into local administrative obstacles.

The structure of our students is beginning to show the effects of the state education policy, which strives to increase the share of university-educated people in the population. This is a Europe-wide, indeed worldwide, trend and there is no point in arguing with it; in principle it is, moreover, a correct goal. But since a decision of the Ministry of Education cannot at the same time raise the average intelligence of the population, nor ensure its greater industriousness and diligence, the structure of our students is changing quietly but unstoppably. Tradition and the coefficients of economic demands at technical universities lead to mass production of students. For our study programme the problem is essentially not the number of applicants but their quality; the numbers of students we admit have long stayed at roughly the same level [6]. The fact is, however, that today we also admit students who less than ten years ago would not have had the slightest chance of being accepted. We still have exceptional and excellent students, but lately I have observed a trend in which below-average students tend to demotivate their more talented colleagues.

Whatever opinions we may hold of e-learning and computer-supported teaching, it is beyond any discussion that these are technologies we will not be able to do without in the long run. An interesting paper on e-learning in geoinformatics was presented at the Scientia est potentia symposium by Petr Soukup and Pavel Žofka [3]. As I have already mentioned, one of the inconspicuous changes in our study plans was the unification of the first two semesters of the branches Geodesy and Cartography and Geoinformatics, and the inclusion of databases and SQL in the introductory course on the fundamentals of informatics, for surveyors as well. There are countless internet courses and tutorials on this topic, but among them A Gentle Introduction to SQL [4] stands out in its own way. A Gentle Introduction to SQL inspired Jan Pytel and me to think about founding our own similar project, SQLTutor, whose working alpha version can be viewed at http://josef.fsv.cvut.cz/cgi-bin/cepek/sqltutor. The implementation itself is secondary; the basic idea of the project is to compile a set of SQL exercises and to dynamically generate tests, or quizzes, from them as needed, with a final point score.
If the project proves viable, the source code will of course be released under the GPL licence, as will the data sets and the questions (here the GNU FDL licence comes into consideration).

In the introductory course our students become acquainted with the PostgreSQL database (that this was a good choice is shown, among other things, by the fact that ESRI is preparing support for the PostgreSQL database for the next version of ArcGIS). For practical SQL exercises we use the Emacs development environment. The question database for SQLTutor is generated from text files in a format that can be run in Emacs using the sql-postgres mode. All the questions can therefore be published both through interactive access to the question database and, completely independently, as simple text files of the kind the students are used to from their exercises. The ideal state would be a sufficiently large database of questions (later we plan to add INSERT, DELETE and UPDATE operations to the SQL SELECTs, as well as multiple-choice questions of the "driving school test" type, with the answer picked from a given list), which would allow high-quality tests to be generated. If the point rating of question difficulty can be set correctly, the system can be used as a form of credit test, or even as a filter which at the examination separates out the students who, for example, have a chance of trying for an excellent grade. With roughly 120 students this can mean a significant saving of the time and effort connected with examining; the time saved can be devoted to individual teaching and to examining the best students. SQLTutor assumes that the individual exercises will be categorized. Introducing question categories will be one of the most demanding tasks, but good categorization would open the way to automated examination, in which it would be possible to verify whether, for example, a student merely slipped up in one particular question or truly does not understand a certain standard task.

In recapitulating the past year I must not forget the celebrations of the 300th anniversary of the foundation of CTU. As part of the celebrations, the branch of Geodesy and Cartography organized, in cooperation with FIG Commission 2, the international symposium Scientia est potentia. The proceedings of reviewed papers from the symposium were published in print [5] and also in electronic form. In the second issue of the journal Geoinformatics FCE CTU we present three contributions which, for reasons of time, were not published in the proceedings [5] (by the authors Karel Večeře, Václav Slaboch and Janis Strauhmanis). The preparation and organization of the international symposium was rather demanding and to a considerable extent came at the expense of the preparation of the second Geoinformatics workshop, which we had to move to September. As last year, all the preparation of the workshop and the publication of the proceedings was handled by Martin Landa, to whom I would like to express my sincere thanks (I believe not only on my own behalf). If any organizational difficulties or shortcomings appear in the organization of the second workshop, they fall entirely on my head and I apologize for them in advance.

Aleš Čepek, 19 September 2007, Prague

[1] Jaroslav Žák: Svět se mění nenápadně. Nakladatelství Olympia, 2nd edition, Prague 1971, 132 pages
[2] Martin Landa, Aleš Čepek, eds.: Geoinformatics, Faculty of Civil Engineering, CTU Prague, workshop proceedings, 183 pages, 2006, ISSN 1802-2669
[3] Petr Soukup and Pavel Žofka: Experience in the application of e-learning tools in teaching. In Scientia est potentia, Czech Technical University in Prague, Czech Republic, 7-9 June 2007, pp. 123-134,
isbn 978-80-01-03718-8
[4] a gentle introduction to sql, http://www.sqlzoo.net/
[5] aleš čepek (ed.): scientia est potentia, proceedings of the symposium dedicated to the development of curricula organized jointly by fig commission 2 and the faculty of civil engineering, ctu in prague, prague, 7-9 june 2007. published by the czech technical university in prague, printed by ctu publishing house, isbn 978-80-01-03718-8, http://geoinformatics.fsv.cvut.cz/wiki/index.php/scientia_est_potentia_-_download
[6] aleš čepek, leoš mervart, and josef kopejska: motivace pro studium geoinformatiky. in jiří horák and pavel děrgel (eds.), sborník sympozia gis ostrava 2007. hornicko-geologická fakulta, institut geoinformatiky, všb – technická univerzita ostrava, 28.-31.1.2007

openstreetmap over wms
přemysl vohnout, jáchym čepický
czech centre for science and society, help service remote sensing
vohnout@ccss.cz, jachym@bnhelp.cz
keywords: wms, openstreetmap, mapserver

abstract
this paper discusses the issues we faced while preparing a wms server with openstreetmap data for the whole of europe. the article is divided into three sections. the first deals with the applications that are required for a working wms service with openstreetmap data. the second focuses on tuning postgresql. the third focuses on improving the rendering time of the layers.

prerequisites
first of all, data are needed. data can be acquired from several repositories that provide live dumps of the osm databases. it is possible to choose the data by geographic and country coverage; you can download, for example, shapefiles, country borders or an "osm dump". the data can be obtained from http://downloads.cloudmade.com or http://geofabrik.de, among others. we used data from geofabrik.de.
we use mapserver for rendering the data from postgresql into wms. several requirements should be met for the best map results:
• mapserver version 5.4 or higher. this is required because of the change of the label rendering engine.
• the osm2pgsql application. this is still somewhat of a development application.
• postgresql version 8.0 or higher with postgis. we recommend 8.3 with postgis 1.3.5 or higher; there was a bug in 1.3.3 which returned a bad bounding box.
• optionally mapserver-utils – this project contains mapfiles for advanced drawing of openstreetmap layers. you need not use these mapfiles, but it is recommended. it can be obtained from http://code.google.com/p/mapserver-utils.
because mapserver 5.4 and postgis 1.3.5 are very recent, you will probably not find them in the repositories of most linux distributions unless you use some kind of rolling-release distribution (like gentoo). here is a small howto for creating your own deb packages in debian. first of all you will need the pbuilder application, which creates a "fake" image of the system. after a small tweak of its config file you can start building debian packages.
aptitude install pbuilder
pbuilder create --distribution lenny
(lenny is the current stable branch of debian, etch is the previous one)
edit /etc/pbuilderrc and put something like this there:
mirrorsite=http://ftp.cz.debian.org/debian/
othermirror="deb http://www.backports.org/debian lenny-backports main | \
deb file:///var/cache/pbuilder/result ./"
hookdir=/etc/pbuilder/hook.d
bindmounts="/var/cache/pbuilder/result /home /usr/src"
extrapackages="debian-backports-keyring proj=4.6.1-5~bpo50+1 proj-bin libproj0 libproj-dev"
• download the package control files from svn://svn.debian.org/svn/pkg-grass/packages//trunk to somewhere you can find them easily (for example /usr/local/src).
• download the tarballs into the directory from the previous step.
• the last step is to create the package with pdebuild --use-pdebuild-internal. if everything is ok you will have a package ready to install.

postgresql part
postgresql is configured for secure and reliable handling of data by default, but if you want to handle large amounts of data it is better to do some tweaking. for better performance of your database management system you should first think about your hardware. memory is faster than a hdd, but it is not possible to buy arbitrary amounts of memory because the number of memory slots is limited. ssd disks are faster, but their capacity is still not large (at the time of writing about 128 gb at most) and they are more expensive. the most common scenario is disks connected into a raid array – raid 1 at least, though we prefer raid 5. raid 5 is slower when writing data to the disks but faster when reading data from them. software tuning of postgresql can be done in the configuration file, which you can find in clusterdir/postgresql.conf.
• shared_buffers – this parameter defines the amount of memory in which postgresql holds its data buffers. it should be set to thousands of buffers (one buffer is 8 kb). some sources suggest one quarter of the available memory, others suggest setting it to 256 mb (32768 buffers).
• maintenance_work_mem – specifies the amount of memory used for maintenance operations such as vacuum, create index, etc. it should be set higher than the default; this is very useful if you use autovacuum.
• effective_cache_size – this parameter influences the postgresql query planner. a larger value encourages index scans, a lower value leads to sequential scans.
• checkpoint_segments – if you see "checkpoints are occurring too frequently (29 seconds apart)" in your log, set this parameter higher than 3.
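purely as an illustration (not a configuration taken from the paper), the four parameters above could be combined in postgresql.conf roughly as follows; the values must of course be adapted to the available hardware:
# example postgresql.conf excerpt (illustrative values only)
shared_buffers = 256MB            # roughly a quarter of ram on a small server
maintenance_work_mem = 256MB      # speeds up vacuum and create index
effective_cache_size = 1GB        # tells the planner how much the os can cache
checkpoint_segments = 10          # raise if checkpoints occur too frequently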
getting data into postgresql
data in the osm format are loaded into postgresql by the osm2pgsql software. before you can use this software you must have a spatially enabled database, and the mercator projection (or "google projection") must exist both in /usr/share/proj/epsg and in the spatial_ref_sys table. this projection was only added to recent versions of the epsg file.
creating the spatially enabled database:
createdb osm
createlang plpgsql osm
psql -f /usr/share/postgresql-8.3-postgis/lwpostgis.sql osm
psql -f /usr/share/postgresql-8.3-postgis/spatial_ref_sys.sql osm
after this, log into postgresql using the osm database and add the 'google projection' into the spatial_ref_sys table:
psql -d osm
insert into spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text) values (900913, 'spatialreference.org', 900913, 'projcs["unnamed",geogcs["unnamed ellipse",datum["unknown", spheroid["unnamed",6378137,0]],primem["greenwich",0], unit["degree",0.0174532925199433]],projection["mercator_2sp"], parameter["standard_parallel_1",0],parameter["central_meridian",0], parameter["false_easting",0],parameter["false_northing",0], unit["meter",1], extension["proj4","+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +wktext +no_defs"]]', '+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +wktext +no_defs');
the last step before inserting data into the database is to create the 'google projection' in /usr/share/proj/epsg. add this line:
<900913> +proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +no_defs
for faster queries it is recommended to put the projections that are actually used at the beginning of the epsg file.
it is now possible to start importing the data into the database. this is done with the osm2pgsql application, which has several parameters: -d specifies the database, -p the prefix of the created tables, -c the amount of memory used for caching. if you run out of memory, try turning on -s.
osm2pgsql -d osm -p osm -c 2048
some post-import operations that improve the output from the database are described in [1].

speeding up rendering time
the mapfiles from mapserver-utils are nice, but they are not usable for a large amount of data such as the whole of europe. if you do not have a very powerful grid and want to see the map before your coffee gets cold, you must do some optimization for small scales (from ∞ to 1:500 000). for example, the osm_line table has 9.5 million records, and at the scale 1:5 000 000 these layers are shown: coastline, borders, cities with more than 100 thousand citizens and highways (motorways). if you select highways from osm_line, then after using the spatial index 5.8 million records are left, and after applying the where clause (highway='motorway') only 50 thousand records remain; so the biggest optimization for small scales is to create a materialized view. materialized views are views that are created physically on disk and act like real tables. normal views are only "links" to tables, so a query on a normal view is executed over all the data in those tables.
the example is done for motorways:
drop table if exists osm_highway_motorways;
delete from geometry_columns where f_table_name='osm_highway_motorways';
create table osm_highway_motorways as select way, osm_id, highway, ref, name, z_order from osm_line where highway = 'motorway' order by z_order;
create index osm_highway_motorways_index on osm_highway_motorways using gist (way);
create index osm_highway_motorways_pkey on osm_highway_motorways using btree (osm_id);
insert into geometry_columns (f_table_catalog, f_table_schema, f_table_name, f_geometry_column, coord_dimension, srid, "type") values (' ', 'public', 'osm_highway_motorways', 'way', 2, 900913, 'linestring');
grant select on osm_highway_motorways to "www-data";
the first two statements clear anything done before. the third statement creates the materialized view. the fourth and fifth statements (if you count only the semicolon-terminated statements) create the indexes. indexes build a tree structure over the data, which makes searching through the data faster. with postgis there are three kinds of indexes available: b-tree, r-tree and gist. a b-tree is only for one-dimensional data, an r-tree only for multidimensional data; gist combines these two indexes in one, which is why it is used in most cases. the sixth statement registers the geometry in the geometry_columns table; this is needed because some gis clients look for geometries only in this table. the last statement grants select privileges to the user that is used in the mapfile to access the database.
another possible optimization is the cluster command, which physically sorts the data in the table according to a selected index. this is very useful with mapserver because it performs the spatial query first.
another optimization can be done on the client side. openlayers can request the map in tiles, i.e. it cuts the map window into small parts that are faster to render. this is set by singletile: false; the parameter cuts the whole map area into several tiles (depending on the tile size) and issues a query for every tile separately. but be careful about partial labels, so set up a gutter in openlayers. the gutter expands each tile by the specified number of pixels, so labels reaching across the tile borders are drawn as well.

external resources
1. rendering openstreetmap data with mapserver, online http://trac.osgeo.org/mapserver/wiki/renderingosmdata

postgresql 8.3
pavel stěhule
department of mapping and cartography, faculty of civil engineering, czech technical university in prague
stehule kix.fsv.cvut.cz
keywords: database systems, open source

abstract
work on open source databases continues at an unstoppable pace. developers have to cope with users' growing demands on the volume of data stored in databases, with stricter response-time requirements, and so on. implementation of the complete ansi sql 200x standard remains, for now, an unattained goal. all databases of the big three (firebird, mysql and postgresql) use a multi-generational architecture, cost-based search for the optimal execution plan, a write ahead log, etc. mysql profiles itself as an sql database able to use specialized database backends that achieve maximal efficiency in particular environments. postgresql is a broadly usable database, benefiting from excellent stability, perfect extensibility and a comfortable environment. finally, firebird is an excellent embedded database that has proven itself in thousands of desktop installations.
according to the original plan, version 8.3 was to be released at the end of the summer – it was to be a version containing the patches finished for 8.2 but insufficiently tested at that time. in the end it turned out that the most important patches needed further work; the features involved were so attractive that it was decided to wait with the release of the new version. 8.3 contains an integrated full-text search, support for deferred confirmation (asynchronous commit), synchronized sequential scans of data files, more economical storage of variable-length data types (shorter than 256 bytes), hot updates and more sophisticated index maintenance (hot indexes). of the patches prepared for 8.2, support for bitmap indexes and for updatable views will not appear in 8.3; the original solution based on rules was too complicated. 8.3 does contain support for updatable cursors, and it is quite possible that updatable views will be implemented in version 8.4 with the help of this class of cursors. development continues with the implementation of further sql modules; in version 8.3 it is specifically sql/xml (an ansi sql extension), which allows operations on xml documents directly in the database and simplifies the generation of xml documents.
a fundamental (internal) change is the shortening of the row header from 28 bytes to 24 bytes. another change that should minimize the size of stored data is the diversification of the varlena type. this type is used in postgresql for serializing the values of all variable-length types; it somewhat resembles a string in pascal – the first bytes carry the length information, the following bytes carry the content. older versions of postgresql knew only a varlena type with 4-byte length information; 8.3 also supports a varlena type with a 1-byte header. the saving should show up mainly with the numeric type and with short strings. starting with this version, postgresql can be built with gcc, mingw as well as microsoft visual c++ (on the microsoft windows platform).

integration of tsearch2
the integration of tsearch2 into the postgresql core is the result of many years of effort by oleg bartunov and teodor sigaev. thanks to the integration, the configuration of full-text search becomes simpler, and for certain languages (those supported by the snowball project) full text can be used right after the database is installed. czech unfortunately is not among those languages – several additional operations are needed: first convert the openoffice dictionaries to the utf8 encoding and copy them into the appropriate postgresql subdirectory, then register the dictionary and perform the so-called configuration. apart from the configuration, the differences between the integrated full text and tsearch2 (from version 8.2) are rather cosmetic.
create text search dictionary cspell (template=ispell, dictfile=czech, afffile=czech, stopwords=czech);
create text search configuration cs (copy=english);
alter text search configuration cs alter mapping for word, lword with cspell, simple;
postgres=# select * from ts_debug('cs','příliš žluťoučký kůň se napil žluté vody');
 alias | description   | token      | dictionaries    | lexized token
-------+---------------+------------+-----------------+----------------------
 word  | word          | příliš     | {cspell,simple} | cspell: {příliš}
 blank | space symbols |            | {}              |
 word  | word          | žluťoučký  | {cspell,simple} | cspell: {žluťoučký}
 blank | space symbols |            | {}              |
 word  | word          | kůň        | {cspell,simple} | cspell: {kůň}
 blank | space symbols |            | {}              |
 lword | latin word    | se         | {cspell,simple} | cspell: {}
 blank | space symbols |            | {}              |
 lword | latin word    | napil      | {cspell,simple} | cspell: {napít}
 blank | space symbols |            | {}              |
 word  | word          | žluté      | {cspell,simple} | cspell: {žlutý}
 blank | space symbols |            | {}              |
 lword | latin word    | vody       | {cspell,simple} | cspell: {voda}
(13 rows)
full-text support over a particular column can be activated, for example, by creating a functional gin index
create index data_poznamka_ftx on data using gin(to_tsvector('cs', poznamka))
and searched with the @@ operator (full-text search)
select * from data where to_tsvector('cs',poznamka) @@ to_tsquery('žlutá & voda')

sql/xml support
xml support has changed fundamentally. what in previous versions was handled clumsily through add-ons has now made it directly into the core. there are functions generating xml (xmlelement, xmlforest, ...) as well as functions mapping the contents of a table to xml. the result can be an xml schema (usable for validation or for transferring the table definition), an xml document with an integrated schema, or the xml document alone. since the output format is standardized in sql/xml, transferring such tables should not be a problem (between databases that support sql/xml).
pavel=# create table a(a date, b varchar(10));
create table
pavel=# insert into a values(current_date, 'něco'),(current_date+1, null);
insert 0 2
pavel=# select table_to_xml_and_xmlschema('a', true, false, '');
table_to_xml_and_xmlschema
----------------------------
[xml schema and document output not reproduced here; it contained the rows 2007-02-19, něco and 2007-02-20]
the same result can be achieved with the xml-generating functions; their use is more universal, and slightly more complicated:
pavel=# select xmlelement(name a, xmlagg(xmlelement(name row, xmlforest(a,b)))) from a;
xmlelement
------------
[xml output not reproduced here; it contained 2007-02-19, něco and 2007-02-20]
(1 row)
postgresql still lacks collate support. at least the order by clause has been extended, namely in the positioning of rows with null values (order by .. nulls first/last), and the parameters of btree indexes have been extended accordingly. refactoring of the code brought support for null in indexes; older versions could not index the null value.
postgres=# explain select count(*) from fx where a is null;
query plan
-------------------------------------------------------------------
aggregate (cost=8.28..8.29 rows=1 width=0)
  -> index scan using bb on fx (cost=0.00..8.28 rows=1 width=0)
     index cond: (a is null)
(3 rows)

new data types and extensions of existing data types
compared with version 8.2, postgresql 8.3 supports several new data types: xml, which guarantees the validity of its content, and uuid (universally unique identifier) according to rfc 4122. the generator itself is in the uuid-ossp contrib module (the uuid and uuid-devel packages need to be installed); ten different ways of generating unique universal identifiers are available. there is also the possibility to use custom enumerated types (evidently inspired by mysql). unlike in mysql, in postgresql a custom type has to be created for a particular list of values before it can be used; its use, however, is considerably broader than in mysql.
-- the classic solution of an enumeration
create table foo(varianta char(2) check (varianta in ('a1','a2','a3')));
-- using the enum type
create type vycet_variant as enum('a1','a2','a3','a4','a5');
create table foo(varianta vycet_variant);
-- an enum can also be used in an array
select '{a1,a3}'::vycet_variant[] as pripustne_varianty;
the range of values is obtained by calling the enum_range function. if we pass null as the parameter, we get the complete list of values.
postgres=# select enum_range('a2'::vycet_variant, 'a4'::vycet_variant);
enum_range
------------
{a2,a3,a4}
(1 row)
postgres=# select enum_range(null::vycet_variant);
enum_range
------------------
{a1,a2,a3,a4,a5}
(1 row)
besides the fixing of several bugs in pl/pgsql (the not null check of domains was missing), the return statement has been extended with a table expression, as mentioned below, and finally scrollable cursors can now be used in pl/pgsql. postgresql has supported them for quite some time, but it was not possible to use them from pl/pgsql. besides scrollable cursors, updatable cursors according to ansi sql 92 (with stricter restrictions than ansi sql 2003) can be used in pl/pgsql (and outside it). for srf (set returning) functions we can now specify their cost and the expected number of returned rows (the cost and rows attributes). in previous versions the search for the optimal execution plan assumed that an srf function always returns 1000 rows, which was not always true (the result was a suboptimal execution plan). a side effect of implementing the subsystem for caching execution plans was the removal of the problems with invalid execution plans in pl/pgsql. these problems showed up mainly with temporary tables, which could not be dropped; otherwise an error occurred whenever an sql statement bound to a dropped and re-created table was used. now the cache is invalidated when database objects are dropped. of course, functions called in a loop will only be executed efficiently if execution plans do not have to be regenerated. in 8.3 arrays can also be created from composite types – in essence, a whole table can be stored as a single value.
support for domains is still missing, however (and an inserted record must be explicitly cast):
postgres=# create type at as (a integer, b integer);
create type
postgres=# create table foo(a at[]);
create table
postgres=# insert into foo values(array[(10,20)::at]);
insert 0 1
postgres=# insert into foo values(array[(10,20)::at, (20,30)::at]);
insert 0 1
postgres=# select * from foo;
           a
-----------------------
 {"(10,20)"}
 {"(10,20)","(20,30)"}
(2 rows)
postgres=# select a[1] from foo;
    a
---------
 (10,20)
 (10,20)
(2 rows)
postgres=# select a[1].a from foo;
 a
----
 10
 10
(2 rows)

optimization
version 8.3 brought a whole range of changes and adjustments that should lead to faster processing of sql statements. loading data with the copy statement should be faster: for this statement there is no reason to write to the write ahead log (which is the basis of recovery after a postgresql crash), and this version is able to bypass the wal write for this statement (the copy must run inside a transaction). postgresql now also executes queries with order by c limit n more efficiently, noticeably speeding up the selection of the first n rows ordered by column c when there is no index on column c (the whole table is no longer sorted). the explain analyze command now provides further information about sorting (the method and the memory consumption):
t=# explain analyze select * from foo order by a limit 12;
query plan
----------------------------------------------------------------------------------------------
limit (cost=3685.48..3685.51 rows=12 width=4) (actual time=290.549..290.588 rows=12 loops=1)
  -> sort (cost=3685.48..3935.48 rows=100000 width=4) (actual time=290.544..290.557 rows=12 loops=1)
     sort key: a
     sort method: top-n heapsort memory: 17kb
     -> seq scan on foo (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.036..153.526 rows=100000 loops=1)
total runtime: 290.658 ms
(6 rows)
t=# explain analyze select * from foo order by a;
query plan
---------------------------------------------------------------------------------------------
sort (cost=10894.82..11144.82 rows=100000 width=4) (actual time=520.528..683.190 rows=100000 loops=1)
  sort key: a
  sort method: external merge disk: 1552kb
  -> seq scan on foo (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.022..159.028 rows=100000 loops=1)
total runtime: 800.065 ms
(5 rows)
previous versions had no hash function for the numeric type, so the hashjoin method, which is one of the fastest, could not be used for joining tables on columns of the numeric type. the like operation should also be faster, especially when a multi-byte encoding is used – a different algorithm is now used for comparing strings: bytes rather than characters are compared, which saves one conversion from utf8 to utf16. on a test table with a hundred thousand yellow horses, a sequential scan of the table sped up from 169 ms to 105 ms. if it is detected that the same table is being sequentially scanned concurrently by several backend processes, the system tries to synchronize those processes (when a sequential scan occurs, in the great majority of cases the whole data file is read). the sequential scans of all processes are about equally fast, so there is a chance that all processes will want the same data page at the same moment, and the probability that they find it in the buffer cache is much higher. if the processes are not synchronized, the probability that the requested page is in the buffer cache is much lower, which increases the probability of a request for a physical read of the data page from the file.
this optimization makes sense with a larger number of concurrently working users, when the probability that synchronization occurs is higher and when there is more pressure on the buffer cache.
support for asynchronous commit is a less dangerous counterpart of the discouraged fsync=off configuration. with asynchronous commit the consistency of the database is guaranteed, but in the event of a crash there is a risk of losing the last few transactions. the synchronous_commit parameter is bound to the session, so a developer can, at his own discretion, choose the less safe but faster way of handling transactions. tests show that it makes sense to consider this parameter for a lightly loaded database where writes to the transaction log are not shared – typically during database administration, when other users have no access to the database and only the dba works with it.
8.3 contains a more sophisticated method for creating new row versions, called hot (heap only tuples). older versions modified the relevant indexes on every update operation, even when no indexed column was modified. a so-called hot update is conditional on there being enough free space on the data page and on only non-indexed columns being changed; if these conditions are not met, a classic "cold" update is performed. a hot update is much more economical than the classic implementation of the operation, and therefore also faster. moreover, this new method can reuse the space on a data page occupied by dead row versions (which were themselves created by this method) without the vacuum operation having to be run.

running vacuum in multiple processes
in 8.3, automatic vacuuming is implemented with support for multiple processes, i.e. if vacuuming one database takes too long, a new worker process is created that takes care of vacuuming further databases (the point is not to speed up the vacuum operation through parallelization, but to ensure that in a given time window all databases get their fair share of vacuuming). the task of a worker process is to go through the operational statistics of all tables in a database and pick the tables due for vacuuming. a worker process is sequential at the level of a database; the parallelization is at the level of the cluster. vacuum now also takes care of automatically invoking vacuum freeze as a protection against wrap-around of the range of row version identifiers.

pl/pgsql extensions – return query, local system variables
previous versions did not allow returning a set of records as the result of an srf function; the only solution was to call the return next statement for every row of the query result. essentially the same thing (but at a lower level, and therefore more efficiently) is done by the return query statement. its parameter is an sql query whose result is appended to the output; like return next, it does not terminate the execution of the function.
create or replace function dirty_series(m integer, n integer)
returns setof integer as $$
begin
  return query select * from generate_series(1,m) g(i) where i % n = 0;
  return;
end;
$$ language plpgsql;
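assuming the function has been created exactly as above, a quick check in psql could look like this (the output follows directly from generate_series):
postgres=# select * from dirty_series(10,2);
 dirty_series
--------------
            2
            4
            6
            8
           10
(5 rows)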
another new feature is the possibility to modify system variables locally for a particular function. t-sql and mysql behave similarly, storing the current setting of the system variables at the time the function is registered; postgresql had no such mechanism. this feature addresses the security of security definer functions, into which an attacker's code could be smuggled by changing the search_path system variable. the syntax is evident from the following example:
create function report_guc(text) returns text as $$ select current_setting($1) $$ language sql set regex_flavor = basic;
alter function report_guc(text) reset search_path set regex_flavor = extended;

support for warm standby mode, a prototype of replication based on the transaction log
postgresql 8.3 makes it possible to configure and use two postgresql servers so that the first serves as the production server and the second as a backup, with data changes on the first server being replicated to the second by exporting the transaction log. this configuration is used only in demanding applications where frequent classic backups are not possible because of the data volume and where data loss is not acceptable – where the customer requires continuous backup. it is not multi-master replication as in the case of mysql; the second system is unavailable until it receives a signal, so this form of replication cannot be used to spread the server load. to ease the configuration, version 8.3 contains the pg_standby command (in the contrib directory of the same name), which keeps the second postgresql instance in warm standby mode.
master (postgresql.conf):
archive_command = 'cp %p ../archive/%f'
archive_timeout = 20
warm standby (recovery.conf):
restore_command = 'pg_standby -l -d -k 255 -r 2 -s 2 -w 0 -t /tmp/stop /usr/local/pgsql/archive %f %p \
2>> standby.log'
after modifying the configuration files it is enough to start both servers. the standby server remains in recovery mode, in which pg_standby gradually feeds it the transaction log segments (it watches the exported segments) and does not allow the recovery of the standby database to finish. only after the signal – the existence of a predefined file (/tmp/stop in the example) – does pg_standby tell the server that the last transaction log segment has been processed and allow the recovery to finish, thereby switching to a state in which the standby server can accept requests. the signal file must be created by the postgres user so that pg_standby can remove it; if the file cannot be removed, the replication fails. in practice this solution is not very usable, because pg_standby cannot catch an exception and therefore cannot handle it correctly (without the replication process being irreversibly interrupted, with loss of data on the standby server).
4916 ? s 0:00 /usr/local/pgsql/bin/postmaster -d /usr/local/pgsql2/data
4918 ? ss 0:00 postgres: startup process
5226 ? s 0:00 sh -c pg_standby -l -d -t /tmp/aaa /usr/local/pgsql/archive 000000010000000000000018 pg_xlog/recoveryxlog 2>> standby.log
5227 ? s 0:00 pg_standby -l -d -t /tmp/aaa /usr/local/pgsql/archive 000000010000000000000018 pg_xlog/recoveryxlog
the standby cluster must be a clone of the backed-up cluster. it must be created by copying the directory of the database cluster – it is not created with the initdb command.

regular expressions
support for regular expressions is nothing new in postgresql. in 8.3 the new function regexp_matches and the pair regexp_split_to_array and regexp_split_to_table appeared; for many tasks we no longer have to use pl/perl. in the following example, a list of identification numbers is extracted from an xml document, then indexed and used for searching. functions supporting xpath expressions could be used for the same purpose. this solution is orders of magnitude faster than the intuitive (and very slow) solution with like.
objednavka_v_xml like '%hledane_id%'
the new.objednavka_id_produktu field is updated in a trigger:
new.objednavka_id_produktu := array(select i[1] from regexp_matches(new.objednavka_v_xml,'(\\d+)','g') r(i));
a predicate functionally comparable to the like above is:
objednavka_id_produktu @> array[hledane_id]

other changes
the ecpg environment received a far from negligible extension: it improves the support for prepared statements and offers an auto-prepare mode and positioned variables. these are fundamental changes – the version number jumped from 4.4 to 6.0. one of the first backports from enterprisedb is a debugger and profiler for pl/pgsql. the new version of pgadmin iii contains a graphical interface for the debugger, which becomes available when the debugger plugin is installed in postgresql (it can be downloaded from pgfoundry). compared with modern debuggers, pl/pgsql offers only basic functions.
pavel=# load '$libdir/plugins/plugin_profiler';
load
pavel=# set plpgsql.profiler_tablename = 'profilerstats';
set
pavel=# select line_number, sourcecode, time_total, exec_count, func_oid::regproc from profilerstats order by 1;
 line_number | sourcecode            | time_total | exec_count | func_oid
-------------+-----------------------+------------+------------+----------
           0 |                       | 0          | 0          | x
           1 | begin                 | 0          | 0          | x
           2 | for i in 1..4 loop    | 0.000315   | 1          | x
           3 | return next i;        | 9.8e-05    | 4          | x
           4 | end loop;             | 0          | 0          | x
           5 | return;               | 3e-06      | 1          | x
(6 rows)

index advisor
the index advisor is a query planner plugin. it is a prototype designed by tom lane to demonstrate the monitoring interface for the design and optimization of execution plans. when it is activated, the optimizer takes into account, in addition to the existing indexes, hypothetical indexes created over every column:
regression=# load '/home/tgl/pgsql/advisor';
load
regression=# explain select * from fooey order by unique2,unique1;
query plan
----------------------------------------------------------------------------------------
sort (cost=809.39..834.39 rows=10000 width=8)
  sort key: unique2, unique1
  -> seq scan on fooey (cost=0.00..145.00 rows=10000 width=8)
plan with hypothetical indexes:
index scan using on fooey (cost=0.00..376.00 rows=10000 width=8)
(6 rows)
regression=# explain select * from fooey where unique2 in (1,2,3);
query plan
-------------------------------------------------------------------------------------
seq scan on fooey (cost=0.00..182.50 rows=3 width=8)
  filter: (unique2 = any ('{1,2,3}'::integer[]))
plan with hypothetical indexes:
bitmap heap scan on fooey (cost=12.78..22.49 rows=3 width=8)
  recheck cond: (unique2 = any ('{1,2,3}'::integer[]))
  -> bitmap index scan on (cost=0.00..12.77 rows=3 width=0)
     index cond: (unique2 = any ('{1,2,3}'::integer[]))
(8 rows)

development in the coming years
the biggest weakness of postgresql is the missing support for replication; in this area commercial systems dominate, and not undeservedly. furthermore, postgresql has not fully solved internationalization, the so-called collates, which both mysql and firebird offer. finally, the third area that now needs intensive attention is support for olap databases. it cannot be expected that postgresql will support olap databases any time soon, but there are certain indications that the main topic of the following version (8.4) will be support for analytical and recursive queries.
in a longer time horizon, the inclusion of support for processing so-called stream data can be expected, since the postgresql platform was used to build experimental prototypes of stream databases and, in addition, part of the developer team is actively working on this topic.
links
1. overview of the features of the individual versions, http://developer.postgresql.org/index.php/feature_matrix
2. overview of the extensions planned for the next version, http://developer.postgresql.org/index.php/todo:wishlistfor84

geostatistical methods in r
adéla volfová, martin šmejkal
students of geoinformatics programme, faculty of civil engineering, czech technical university in prague

abstract
geostatistics is a scientific field which provides methods for processing spatial data. in our project, geostatistics is used as a tool for describing spatial continuity and making predictions of some natural phenomena. an open source statistical project called r is used for all calculations. listeners will be provided with a brief introduction to r and its geostatistical packages and with the basic principles of the kriging and cokriging methods. the heavy mathematical background is omitted due to its complexity. in the second part of the presentation, several examples show how to make a prediction over the whole area of interest when observations were made at just a few points. the results of these methods are compared.
keywords: geostatistics, r, kriging, cokriging, spatial prediction, spatial data analysis

1. introduction
spatial data, also known as geospatial data or shortly geodata, carry information about a natural phenomenon including its location. this location allows us to georeference the described phenomenon to a region on the earth; it is usually specified by coordinates such as longitude and latitude. by mapping spatial data, a data model is created. we will focus on a raster data model, which provides a value of the phenomenon at each pixel of the area of interest. mapping is a very common process in sciences such as geology, biology, and ecology. geostatistics is a set of tools for predicting values at unsampled locations using the spatial correlation between neighboring observations. making use of geostatistics requires difficult matrix computations, briefly described in the chapters on ordinary kriging and multivariate geostatistics. in order to make our predictions easier, we are going to use methods from the r geostatistical packages introduced in the chapter r basics. the best known geostatistical prediction methods are called kriging (for a univariate data set) and cokriging (for a multivariate data set) — examples of their use are shown in the chapter example of kriging and cokriging in r.

2. r basics
there are many applications implementing geostatistical methods. most of them are complex gis, and most of them are commercial. this does not hold for the r project. r is a language and environment for statistical computing and graphics. r is available as free software under the terms of the free software foundation's gnu general public license. r is multi-platform, easy to learn, and comes with a huge number of additional packages that extend its functionality; such packages serve, for instance, special branches of statistics such as geostatistics.

2.1. first steps in r
after downloading and installing r, open the default r console editor (figure 1). a standard prompt > appears at the last line. when this prompt is shown, r is ready to accept a command. another prompt exists (+), which is used for commands spanning multiple lines. there are two symbols for assigning a value to a variable: <- and =.
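purely for illustration (this snippet is not part of the original text), a minimal interactive session showing both prompts and both assignment symbols could look like this:
> x <- 5          # assignment with <-
> y = c(1, 2,     # assignment with =; the command continues
+       3)        # on the next line after the + prompt
> x + sum(y)
[1] 11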
> summary(v)
  min. : 0.00   1st qu. : 81.75   median : 100.50   mean : 97.50   3rd qu. : 116.25   max. : 145.00
# variance
> var = sum((v-97.5)^2)/length(v)
689.69
# interquartile range
> iqr = 116.25-81.75
34.5
# coefficient of skewness
> cs = sum((v-97.5)^3)/sqrt(var)^3/length(v)
-0.771
# coefficient of variation
> cv = sqrt(var)/97.5
0.269
the coefficient of skewness is, in this case, negative, which means the distribution rises slowly from the left and the median is greater than the mean. the closer the coefficient of skewness is to zero, the more symmetrical the distribution and the smaller the difference between the median and the mean. the coefficient of variation is quite low; if this value is greater than 1, a search for erroneous observations is recommended.

4.3. variogram
in order to plot an empirical variogram, we need to set a proper distance for the lag (the x-axis of the plot). when the lag is too small, the variogram goes up and down despite its theoretically increasing trend before the range distance and constant trend for distances larger than the range. when we set the lag too large, we get just a small number of values (breaks) on the variogram curve and we would not see the important characteristics of the variogram such as the range, the sill, etc. in our example we set the lag to 10 m. a variogram cloud (all pairs of points) and an empirical variogram with the given lag for the v variable are in figure 7. these variograms were created by the variog function. the theoretical variogram is modeled with the lines.variogram function or with the interactive tool eyefit.
figure 7: variogram cloud and empirical variogram (lag = 10 m) of v.
in our example in figure 8 we set the maximal distance to 100 m, the covariance model to exponential, the range to 25 m, the sill to 65000, and the nugget to 34000.

4.4. analysis of multivariate data
since we wish to take advantage of the spatial dependency of a primary and a secondary variable, we need to analyze the data sets. the goal is to examine whether the covariates are dependent enough for the secondary variable to improve the prediction of the primary variable.
figure 8: theoretical variogram of v.
the first thing we can try is to compare the shapes of the histograms. very similar shapes (i.e. similar distributions) indicate a certain degree of dependency. by using the cor(u,v) function in r we can get a correlation coefficient (in this case 0.837). its value always lies within the interval 〈−1, 1〉; the closer to zero, the less dependent the data sets are. in order to compare two distributions, we can visualize the so-called q–q plot (the qqplot function in r). each axis represents the quantiles of one data set (see figure 9). if the plotted data are close to the y = x line, the variables are strongly dependent. if the data form a straight line that has a different direction than y = x, the variables still have a similar distribution but with a different mean and variance.
figure 9: q–q plot, straight line represents y = x, qqplot(v,u).
another graphical tool for testing the dependency of two spatial data sets is the so-called scatterplot.
pairs made of the primary variable value and the secondary variable value at the same location are visualized as points in this plot. the result is a cloud of points (see figure 10 for our example on the u and v data). the narrower the cloud, the higher the degree of dependency. a scatterplot has another big advantage — outliers and measurement errors lie outside the cloud. we can then easily check these points and, in case they are wrong, take them out of the data set.
figure 10: scatterplot, plot(v,u).
the dependency of two variables can be approximated by a linear regression given by y = ax + b. how to do this in r is shown in the following code.
# method for linear regression
model = lm(u~v)
# plot
plot(v, u, main="scatterplot and linear regression")
abline(model)
# model parameters
summary(model)
the plot from the previous example is in figure 11. an alternative to a linear regression can be a graph of conditional expectation, where one variable is divided into classes (as when we create a histogram) and the mean of the other variable is calculated within these classes, see figure 12.
figure 11: linear regression u = 0.314v − 11.5.
figure 12: conditional expectation of v within classes defined on u values.
since we have explored the data sets, done the basic geostatistical analysis and determined the spatial continuity and the dependence of the covariables, we may proceed to prediction. from now on, we are going to use a new data set that is more suitable as an example for prediction by (co)kriging.

4.5. example of kriging and cokriging in r
in the following part of this paper, we are going to make two predictions — one using only the primary variable on its own and the ordinary kriging method, and the other using the secondary variable and the ordinary cokriging method. we are going to compare these two methods using some graphical and tabular outputs.

4.6. data description
the phenomena we use in this example are simulated random fields in a square region of 50 by 50 pixels (i.e. 2500 pixels/values in total). we randomly select some values and declare them to be measurements (the layout of the samples is not random — we try to cover the whole region and arrange the samples in a grid; however, the samples are randomly chosen from a neighborhood of each node of the grid). after the prediction is made, we can easily compare the results with the original data set. this is not how it works in reality — we do not have values of the variable at each location of the region, which is why we do the prediction; however, for educational purposes, comparing predicted and real values is a good way to show how these methods work and how well they work. simulation of gaussian random fields with the grf method in r was chosen to create our phenomena. this method is able to create a random raster which can represent a continuous spatial phenomenon. gaussianity of the spatial random process is an assumption common to most standard applications in geostatistics; however, non-gaussian data are often encountered. how to deal with this sort of data is described in detail in [9]. in our paper, two fields were created by the grf function, each representing one variable (called a and b). a is our primary variable for which the prediction will be made.
b is just an auxiliary variable for forming the secondary variable, c. c is strongly correlated with a; the correlation coefficient is about 0.93. all three fields are shown in figure 13. the r code for creating and plotting these three fields follows:
library(geoR)
library(gstat)
set.seed(1)
# creates a regular grid of 50 by 50 pixels
# the covariance parameters are sigma^2 (partial sill) and phi (range parameter)
a = grf(50^2, grid="reg", cov.pars=c(1, 0.25))
# all values of a are non-negative
a$data = (a$data + abs(min(a$data))) * 100
set.seed(1)
# covariance model is set to matern, smoothness parameter kappa is set to 2.5
b = grf(50^2, grid="reg", cov.pars=c(1200, 0.1), cov.model="mat", kappa=2.5)
c = a
c$data = a$data - b$data
# all values of c are non-negative
c$data = c$data + abs(min(c$data))
library(fields)
img_a = xyz2img(data.frame(a))
img_b = xyz2img(data.frame(b))
img_c = xyz2img(data.frame(c))
par(mfrow=c(2,2))
image.plot(img_a, col=terrain.colors(64), main="a", asp=1, bty="n", xlab="", ylab="")
image.plot(img_b, col=terrain.colors(64), main="b", asp=1, bty="n", xlab="", ylab="")
image.plot(img_c, col=terrain.colors(64), main="c", asp=1, bty="n", xlab="", ylab="")
figure 13: simulated random variables.
both a and c have a normal distribution, and all values are non-negative for the sake of easier presentation. the coordinates are in the range 〈0, 1〉. the basic statistics are in table 2. the sample data set consists of 166 measured values of c and 63 values of a. the primary variable samples fully overlap the samples of the secondary variable, and the secondary variable sample grid is much denser (see later in figure 18).
table 2: basic statistics of primary and secondary variable.
 variable      | number of values | minimum | median | mean  | maximum
 a (primary)   | 2500             | 0.0     | 289.8  | 286.1 | 517.0
 c (secondary) | 2500             | 0.0     | 342.8  | 342.6 | 590.5
let us have a look at some analytical graphical tools — histograms of the samples are shown in figure 14, q–q plots are shown in figure 15, and a scatterplot is shown in figure 16. according to these plots, we can conclude that the samples have a normal distribution and that the distributions are quite similar, which confirms the strong correlation of the variables.
figure 14: histograms of a (left) and c (right).
figure 15: q–q plots of a (left) and c (right).
figure 16: scatterplot of a and c.

4.7. prediction using ordinary kriging
the use of ordinary kriging in r is very simple. once we have determined the theoretical variogram, we can proceed to the prediction. see the following code:
# create a grid for the prediction
gr = data.frame(coord1=a$coords[,"x"], coord2=a$coords[,"y"])
gridded(gr) = ~coord1+coord2
# assign coordinates to variable a
coordinates(dataframea) = ~coord1+coord2
# variogram model
vm = variogram(data~1, dataframea)
vm.fit = fit.variogram(vm, vgm(6500, "sph", 0.3, 50))
# prediction using ordinary kriging
ok_a = krige(data~1, dataframea, gr, vm.fit)
this is all we need to do to get a prediction at unsampled locations when the input is only the primary variable a. the results are shown in figures 18 and 19.
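as a small addition that is not part of the original code, the prediction can be inspected visually; with the usual gstat/sp conventions the krige result stores the prediction and its variance in the columns var1.pred and var1.var:
# plot the ordinary kriging prediction and its variance
library(sp)
spplot(ok_a["var1.pred"], main = "ordinary kriging prediction of a")
spplot(ok_a["var1.var"],  main = "ordinary kriging variance")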
let us have a look at how the process changes when we wish to include the secondary variable.

4.8. prediction using ordinary cokriging
a detailed description of how to perform an ordinary cokriging prediction in r is given in [3]. we have already concluded that the variables a and c are spatially dependent. the most difficult step in prediction by ordinary cokriging is to set up a linear model of coregionalization (in other words, to describe the spatial dependence between the covariables). we need to fit the samples to proper variogram and cross-variogram models. follow the example in the code below:
# create a gstat object g (necessary for correct use in the following methods)
# variables a and c are saved in class data.frame
# add a and c to object g
g <- gstat(NULL, id = "a", form = data ~ 1, data=dataframea)
g <- gstat(g, id = "c", form = data ~ 1, data=dataframec)
# empirical variogram and cross-variogram
v.cross <- variogram(g)
plot(v.cross, pl=T)
# add variogram to object g
# vma_fit is a previously created variogram model
g <- gstat(g, id = "a", model = vma_fit, fill.all=T)
# create linear model of coregionalization
g <- fit.lmc(v.cross, g)
plot(variogram(g), model=g$model)
the model of coregionalization is shown in figure 17. the upper figure is the variogram of the samples of a; the empirical variogram does not look good due to the small number of input samples. look at the improvement of the variogram for c (lower right), where the number of samples is about three times larger. the lower left figure is the pseudo cross-variogram. the covariance model is identical (spherical in this case) for all three variograms, and the range is maintained as well (about 0.3). this means that the covariables behave similarly in space — they show the same degree of dependence for a given distance. since we have obtained the linear model of coregionalization, we can proceed to prediction using ordinary cokriging.
figure 17: variogram and pseudo cross-variogram of a and c.
the prediction step in r is actually very simple; it is literally a single call of the predict.gstat method. this method decides (based on the input data) which prediction method to use. two predictions are actually made, one for our primary variable and one for the secondary one, because the method does not distinguish between those variables (i.e. we never specify which one is the primary).
# gr is the prediction grid
ck <- predict.gstat(g, gr)
comparisons of some statistics are listed in table 3. the contribution of the c variable to the prediction of a is obvious: the extreme values got closer to the real extreme values of a, the same holds for the median and the mean, and the values of the prediction variance got significantly lower.
table 3: comparison of ordinary kriging (ok) and ordinary cokriging (ck).
 data     | min.  | med.  | mean  | max.  | mean of var. of pred. | max. var. of pred.
 a real   | 0.0   | 289.8 | 286.1 | 517.0 | –                     | –
 ok a     | 98.2  | 277.4 | 280.8 | 482.4 | 2804                  | 5329
 ck a, c  | 42.3  | 279.6 | 281.0 | 482.4 | 1617                  | 3293

 data     | min. diff. | mean diff. | max. diff. | med. of diff. | med. of abs(diff.) | rmse
 ok a     | -153.5     | -5.1       | 179.1      | -4.1          | 32.2               | 49.1
 ck a, c  | -177.0     | -5.1       | 166.4      | -3.0          | 30.8               | 46.8
rmse stands for root mean square error: $\mathrm{rmse} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z^{*}(x_i) - z(x_i)\right)^2}$.
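as a small aside (not part of the original paper), the rmse reported in table 3 can be computed directly in r from two aligned vectors of predicted and true values:
# root mean square error of a prediction
rmse = function(predicted, actual) {
  sqrt(mean((predicted - actual)^2))
}
# applied to aligned vectors of predictions and true values of the field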
the best presentation of the results is a visualization of the predictions (figure 18) and of the prediction errors (figure 19). it is obvious that the cokriging prediction describes the regions with extreme values more precisely. however, we can see that the kriging prediction did a good job too, thanks to the relatively sufficient number of samples and (more importantly) their proper layout. it is up to us to decide whether this prediction is accurate enough or not; if not, we need to provide the prediction with samples of another variable that is highly correlated with the primary one and that has denser sampling. the question is whether the improvement is worth the cost of the secondary variable data set. let us pay attention to the errors figure, particularly to the middle map with the real errors. we can see that in the case of ordinary cokriging a red cloud of errors appeared in the middle. this is a somewhat negative impact of the c samples. let us recall that the c variable is derived not only from a but also from the b variable (figure 13), which has a large region of negative values exactly in the place where the red cloud of errors appeared. this region affected the c samples as well as the final prediction of a. this may have a dangerous impact on the prediction when a secondary variable is used, and this is why the degree of dependency of the covariables has to be really high.
figure 18: ordinary kriging and ordinary cokriging for a and c (upper left – real values of a, upper right – samples (red – a, blue – c), lower left – ordinary kriging, lower right – ordinary cokriging).
figure 19: prediction errors. upper row for ordinary kriging, lower row for ordinary cokriging; left: variation of prediction, middle: real estimation errors, right: absolute values of estimation errors (circle – a, plus – c).

5. conclusion
both methods, ordinary kriging and ordinary cokriging, were shown to lead to a successful prediction. as we expected, the gain from the secondary variable was obvious; however, we always need to weigh the cost of obtaining it against the quality of the prediction without it. we tried many more combinations of covariables during this project that are not mentioned in the paper. we worked with yet another variable that was not so strongly correlated with the primary one; the results in that case were not good, which we expected. we tried different sample layouts for the primary and secondary variables. the biggest gain in prediction was achieved when the primary data set was so sparse that prediction by ordinary kriging was almost impossible to carry out (we could not create the variogram); by adding the secondary variable, the prediction gave us quite decent results. we also tried using the same primary variable as in this paper and a secondary variable differing only in the sample locations — they did not overlap with the primary variable samples (their count was still about three times higher than the number of samples of the primary variable). this is the case where we cannot tell how good the spatial dependency of the covariables is, and so it is harder to create the linear model of coregionalization. the results of such a prediction were not as good as in the case presented in this paper; however, we still managed to enhance the prediction of the primary variable. this paper was originally made for educational purposes. it shows how to do basic spatial data analysis and how to predict values of some phenomenon at unsampled locations. two methods were described — ordinary kriging and ordinary cokriging. readers of this paper were provided with a step-by-step prediction process in the r environment.
acknowledgments
the project was supported by grant sgs11/003/ohk1/1t/11. many thanks belong to prof. dr. jürgen pilz, who became a great inspiration leading to the inclusion of geostatistics and the r project in the geoinformatics programme at the czech technical university.

references
[1] wackernagel, h. (2003): multivariate geostatistics. 3rd edition. springer, germany.
[2] isaaks, e. h.; srivastava, r. m. (1989): applied geostatistics. oxford university press, new york.
[3] rossiter, d. g.: co-kriging with the gstat package of the r environment for statistical computing. web: http://www.itc.nl/~rossiter/teach/r/r_ck.pdf.
[4] cran task view: analysis of spatial data. web: http://cran.r-project.org/web/views/spatial.html.
[5] the comprehensive r archive network. web: http://cran.r-project.org.
[6] cressie, n. (1993): statistics for spatial data. wiley interscience.
[7] hengl, t.: a practical guide to geostatistical mapping. 2nd edition. office for official publications of the european communities, luxembourg. web: http://spatial-analyst.net/book/.
[8] diggle, p. j.; ribeiro, p. j. jr. (2007): model-based geostatistics. springer.
[9] pilz, j. (ed.) (2009): interfacing geostatistics and gis. paper: bayesian trans-gaussian kriging with log-log transformed skew data by spöck g., kazianka h., and pilz j. springer.

horizontal comparator for the system calibration of digital levels — realization at the faculty of civil engineering, ctu prague and in the laboratory of the department of survey and mapping malaysia (jupem) in kuala lumpur
zdeněk vyskočil and zdeněk lukeš
department of geomatics, czech technical university in prague
thákurova 7, 166 29 prague 6, czech republic
zdevyskoc@centrum.cz, zdenek.lukes@fsv.cvut.cz
keywords: digital level, geodesy, leveling staff calibration, metrology, surveying

abstract
metrological procedures require a leveling staff calibration for the estimation of the true staff scale. the calibration process is usually carried out on laboratory comparators. two automatic comparators for digital level calibration were built by the staff of the department of geomatics. this article brings some information about the properties of the developed systems and about the control software for the comparators.

1. introduction
in 2009 the horizontal comparator in the metrological laboratory in kuala lumpur was modernized in the frame of the cooperation between the department of advanced geodesy (now the department of geomatics) at the czech technical university (ctu) and the department of survey and mapping malaysia (jupem). the new arrangement allows automatic calibration of digital levels. the model solution for this innovation was the rebuilding of the comparator in the laboratory of ctu prague, where the functional prototype of the system was designed by rebuilding an older calibration bench.
this kind of calibration facility used for system calibration of digital levels can be found in geodetic or metrological laboratories in the world in two varieties. the technology of system calibration of digital levels is generally known and several universities and institutes are using comparators to process calibration. comparator for system calibration items of this type serve generally to leveling staff calibration or for calibration of digital levels [1]. calibration means here the true length of staff meter estimation. calibration can be realized as optical aiming at graduation respectively at bar code edges and comparison geoinformatics fce ctu 14(2), 2015, doi:10.14311/gi.14.2.6 55 http://orcid.org/0000-0002-6505-9068 http://orcid.org/0000-0003-2542-9623 http://dx.doi.org/10.14311/gi.14.2.6 http://creativecommons.org/licenses/by/4.0/ z. vyskočil & z. lukeš: comparator for system calibration of digital levels with length value measured by a laser interferometer. the second variant is using of digital level height reading. staff interval is read by the digital level and compared with the value realized by interferometer. figure 1 shows the system scheme. the first method gives a higher precision of staff scale detection, but this method is time demanding and it is very difficult to capture a possible irregularity of the staff scale. in opposite to it, the second method can be easily automatized thanks to the availability of digital levels remote control and an automatic reading. next advantage of this system is the recent determination of reading precision of level (but without influence of compensator function ability). this process of calibration is associating staff calibration and currently instrument testing and in literature it is called “system calibration”. during the system calibration process level height reading is changed through staff shifting and this change is compared with a length variance measured by the interferometer. it provides a height deviation as output, which can be processed by linear regression and provides staff scale. comparator for system calibration allows change of position toward the line of sight of the level, which is stably emplaced on a pillar. comparators are mostly vertical, allowing measurement at the vertical staff, which is shifted in vertical direction. the second variety is horizontal comparator (see the scheme below). the staff lies horizontally on a chariot. vertical picture of a bar code is displayed in the mirror tilted by 45 deg to levels line of sight and to staff normal. the advantage of the first variant is that it allows installing level in every distance. the disadvantage is constructional, a large vertical space is needed for an installation (twice length of the staff). in the case of horizontal comparator, real disadvantage is a limited distance of level and the staff given by mirror dimensions. figure 1: scheme of horizontal comparator for system calibration geoinformatics fce ctu 14(2), 2015 56 z. vyskočil & z. lukeš: comparator for system calibration of digital levels 2. comparator at the department of geomatics submission the old comparator carl zeiss jena was used as a fundament of the automatic comparator, see fig.2. it was originally assigned for the precise machine measurement and is consisting of a precise straight traverse bench. this comparator was so far used just for an optical calibration of level staffs and other length scales. 
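Whatever the mechanical arrangement is, the evaluation of a system calibration reduces to the linear regression mentioned above: pairs of interferometer displacements and level readings are collected and a straight line is fitted whose slope is the staff scale. The following sketch only illustrates this computation on made-up numbers; it is not the code of the control programs described below.

#include <cstdio>
#include <vector>

// One calibration point: displacement measured by the laser interferometer
// and the corresponding change of the level (staff) reading, both in metres.
struct Observation { double interferometer; double level; };

// Fit level = offset + scale * interferometer by ordinary least squares.
// The slope is the staff scale; (scale - 1) expresses the scale error.
void fitStaffScale(const std::vector<Observation>& obs,
                   double& scale, double& offset)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(obs.size());
    for (const Observation& o : obs) {
        sx  += o.interferometer;
        sy  += o.level;
        sxx += o.interferometer * o.interferometer;
        sxy += o.interferometer * o.level;
    }
    scale  = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    offset = (sy - scale * sx) / n;
}

int main()
{
    // Fictitious readings taken in 20 mm calibration steps on a staff
    // that is 15 ppm too long.
    std::vector<Observation> obs;
    for (int i = 0; i <= 100; ++i) {
        double x = 0.020 * i;                 // interferometer displacement [m]
        obs.push_back({x, x * 1.000015});     // level reading [m]
    }
    double scale, offset;
    fitStaffScale(obs, scale, offset);
    std::printf("staff scale: %.6f (%.1f ppm)\n", scale, (scale - 1.0) * 1e6);
    return 0;
}

With the fictitious data above the program prints a scale of about 1.000015, i.e. 15 ppm.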
the crucial modification lies in the change of a configuration of reading apparatus and the staff. during the optical calibration, the staff is fixed and a microscope (or another reading item) is moving on the traverse bench and its shift is controlled by an interferometer. in the system calibration improvement there is a reading item (digital level) stabile fixed and the staff is moved on the bench. this is necessary because of the distance fixing between the staff and the level that means keeping of a bar code image focusing. renovation of the comparator is implemented in the way that all new devices can be easily removed and it is possible to use the current comparator in a primary configuration. the position of the bench in laboratory allows measurement with the level in the distance interval from 2.5m to 6m in the case of positioning of the instrument in the next room and thanks to a through view in the wall. realization the bench is six meters long what is the efficient length for the purpose of calibration. comparator is positioned plumb to the wall in the distance of about 0.3 m from the bench edge to the wall. this is a limiting calibration interval, which is now just 2.5 m. this year it is planned to enlarge the space around the comparator and to extend the calibration interval in the length of the staff. the mirror is fixed in the centre of the comparator, it hangs on the solid console, which is constructed from rectangle hard aluminum profiles 100 x 30 x 3 [mm]. the mirror holder allows to set a station around three coordinate axes and also precise petting its tilt 45 degree to horizontal line of a sight of the level and to the normal of the horizontal positioned staff. the staff is positioned on the holder and shored up by three screws which allow to set up the horizontation. shifting of the staff is realized by the step motor which drives a ball screw in 10:1 gear. the ball screw is 4500 mm long and it has 40 mm diameter. the gearing and step dividing of the step motor allows precise positioning with the resolution about 10 µm but on the other side limits a maximal velocity of shifting. the laser interferometer renishaw ml10 gold is used as a length standard for this comparator. this instrument is connected to pc via usb interface. standard error of determined staff interval is given by the manufacturer as 1 ppm. the system is controlled by the original program developed in the department of geomatics and it is described in the last paragraph. the staff calibration in the interval of 2 m takes at this facility about 30 minutes in the step of calibration 20 mm. the interval and the step of calibration is configurable at will. 3. realization for jupem submission the submitter of the action, jupem, required a creation of modern system for automatic geoinformatics fce ctu 14(2), 2015 57 z. vyskočil & z. lukeš: comparator for system calibration of digital levels figure 2: comparator in laboratory of the department of geomatics. system calibration of digital levels. it was negotiated with the submitter that the automatic system will be built on the current platform of horizontal comparator. it was realized in metrological laboratory in 80’s by an expert group from japanese gsi, see fig. 3. realization the comparator consists of a huge steel bench from h-profile of dimension 600 x 400 x 15 mm, the bench is 6 m long. in precise milled cuttings on the upper side of the bench there are two rails fixed of linear shifting. the chariot with adjustable staff holder is moving on the rails. 
in the middle of the comparator bench there is a suspension console. in the origin modification there was a light holder and a reading microscope at the console. the microscope was removed by a digital camera and the holder was modified for the mirror positioning and fixing. the camera is here for the calibration of optical (regularly graduated level staff). as in the case of institute of geomatics comparator, here is the interferometer (on the solid pillar) and level on the opposite sites of the bench. for the level position there is a steel pillar situated 4 m from the middle of the comparator. after the negotiation with the submitter it was decided to build up automation items in the same way as they were done in the case of the horizontal comparator in the institute of geomatics (ctu prague) laboratory. montage of the components (except for some details) was accomplished as it’s described in the paragraph “the comparator at the institute of geomatics – realization”. geoinformatics fce ctu 14(2), 2015 58 z. vyskočil & z. lukeš: comparator for system calibration of digital levels figure 3: comparator in laboratory of jupem. the brand new laser interferometer lms from limtek blansko/czech republic was bought for jupem laboratory. the instrument fills required parameters: open communication protocol, ethernet communication interface, delivery of just required components. costs for the instrument are on the same level as asked competitors renishaw and agilent. atmosphere control sensors connected via wi-fi to the control unit of interferometer are optional accessories of the instrument. for the staff shifting the well-tried step motor microcon is used. is is operated by (via serial port easily programmable) control unit. the vendor also supplies compatible right angle worm-gears, the smallest rate 10:1 was used. in consideration of preparation heftiness there were all mechanical parts of innovation prepared in advance in the czech republic and sent to jupem. the fitting of components, the montage and the adjustment took nine workdays. in regard of robust material used for the body of the comparator (steel) and new items (hard –aluminum) are all parts connected with screws in coils or with nuts. software the original program working in linux os was developed for the calibration system, see fig.4. this software connects staff shifting control (by force of step motor), laser interferometer measurement and digital level measurement. in the time of development of the program there were available communication protocol just from manufacturers leica and trimble, so it was so far possible to work with these instruments: wild na2000/na3003, leica dna03/10, trimble zeiss dini11/12/12d. in the program there is possible to accomplish single measurement – staff shifting to required position, staff reading, interferometer and athmospheric sensors measurements. the program above all allows execution of full automatic calibration after setting of input parameters. these are interval of calibration (interval of staff reading), step of calibration and number of measurement repeating. the computed staff scale and the calibration protocol are outputs. measured data, deviations and an actual system scale are displayed during the measurement in the table. individual calibrations are organized into projects (corresponding to pair instrument – staff) and subordinate relations (sessions). this cascading allows easy comparing of results by geoinformatics fce ctu 14(2), 2015 59 z. vyskočil & z. 
lukeš: comparator for system calibration of digital levels repeated calibrations. figure 4: window of control program. 4. conclusion both introduced systems are, in the sense of design, similar. in the time of usage (six years) of the comparator in jupem there occurred no defect (term of guarantee was negotiated to one year), employees of the office were theoretically and practically trained and they manage the work with comparator without any problems. in the article, there was noticed that this year there are planned some constructive modifications of the laboratory of the institute of geomatics. these changes allow calibration of leveling staffs in their whole length. our next goal is to complete the calibration basement of possibility of separate (full automatic as well) calibration of scale by aiming at staff graduation (bar code edges) with a digital microscope. the final goal is the separate calibration of the staff and the level. at the same time, there is 24 meters long horizontal comparator for electronic distance meters (edm) calibration developed in the same laboratory. automatic system of edm will be described in some next paper. geoinformatics fce ctu 14(2), 2015 60 z. vyskočil & z. lukeš: comparator for system calibration of digital levels references [1] helmut woschitz and fritz k. brunner. “development of a vertical comparator for system calibration of digital levels”. in: österreichsiche zeitung für vermessung und geoinformation 91 (apr. 2003), pp. 68–76. geoinformatics fce ctu 14(2), 2015 61 the importance of computational geometry for digital cartography tomáš bayer faculty of science, charles university in prague bayertom@natur.cuni.cz keywords: computational geometry, digital cartography, open source, gis, automated generalization, convex hull abstract this paper describes the use of computational geometry concepts in the digital cartography. it presents an importance of 2d geometric structures, geometric operations and procedures for automated or semi automated simplification process. this article is focused on automated building simplification procedures, some techniques are illustrated and discussed. concrete examples with the requirements to the lowest time complexity, emphasis on the smallest area enclosing rectangle, convex hull or self intersection procedures, are given. presented results illustrate the relationship of digital cartography and computational geometry. introduction needs of human to capture and represent surrounding landscape are very old. the first evidences can be found on the walls of caves or animal horns; they are associated with the beginnings of the cartography. cartography is over two milenia old science, but during this period has been radically changed. adding mathematical fundamental and analytical methods to the process of data acquisition and mapping resulted in the birth of earth sciences. due to new knowledge in mathematics, physics, computational geometry, statistics and informatics the methods of creating maps have been rapidly modified and enforced (kolar at al, 2008). the transformation process of analogue maps to digital maps incurred as a result of cartographic representation of the earth based on planar structures (eg. points, lines, polygons) brought some new problems that can be effectively solved using computational geometry. in digital cartography are some new geometric structures like topological skeleton, voronoi diagrams, delaunay triangulation has been started to use. 
geinformatics fce ctu 2008 15 the importance of computational geometry for digital cartography from computational geometry to natural sciences the beginnings of the computational geometry arose as a response to data acquisition and data processing techniques changes at the 60th of 20 century. their transformation into digital form brought a new data representation of the landscape, based on its decomposition to 0d, 1d, 2d, 3d entities. the process of creating maps was associated with the lack of digital data analysis and synthesis. it led to the need of their processing with the least amount of manual interventions by an operator. a number of new techniques aimed to planar or spatial data analysis and their relationships has been created. those exact methods were based on linear algebra, geometry, cartography, statistic or adjustment calculus. based on synthesis of these findings, a new field “computational geometry” has been established. the computational geometry studies features of geometry algorithms in 2d or 3d and tries to find an optimal solution for geometry problems due to the time complexity. whereas there is a bigger amount of data we are able to process, it is necessary to solve problems effectively. due to the difference of the cartographic and informatic look to problems, this article tries to find unifying perspective emphasing importance of the computational geometry for the cartography education. in order to the czech republic does not become passive consumers of information technologies, it is necessary to invest in development of own geoinformatic problems solutions. this fact plays and important role and can not be underestimated in the long term perspective. the educational process must be adapted to those facts. it is not sufficient to focus only on practical solving of problems. based on an analysis, the student should be able to find optimal solution for the problem. in general terms, it is necessary to strengthen the teaching of natural sciences. in today´s highly over-technized world plays the ability of exact assessment of the problem an important role. it allows to reduce an inwardness of human decisionmaking and a dependency on ideologies. but this concept is in the discrepancy with the requirements to practical focus of higher education. it is possible to illustrate those problems on the educational process of computational geometry with the focus on interrelationships and interdisciplinary links. students would be able to feel the problem comprehensively and solve it much more effectively. computational geometry and building generalization a map represents an abstract expression of the reality. to maintain the basic characteristics of the cartographic products (dis passionateness, clearness, lucidity...), a controlled reduction of information must be performed. this process is called “generalization” and results in the simplification of the map content. generalization takes an important role in computer graphics, it allows to reduce the amount of information and shorten the visualization process. generalization is a subjective process with an accent to knowledge and experiences of the cartographer. computational geometry makes the process of simplification less dependent on a subjective view of a cartographer. an algoritmization of the simplification process is not unambiguous. it is not an easy task to find and set a geometric criterion, that is supposed to be satisfied by a simplified element. 
the simplification represents a process of more interdependent steps, an implementation of one step causes the next step. this geinformatics fce ctu 2008 16 the importance of computational geometry for digital cartography part of the article uses information and mathematical background from a new simplification algorithm proposed by the author. generalization factors. the are several important factors of the generalization that affect the results. they could be divided into four categories: map scale, the purpose of the map, characteristic of the territory, used cartographic symbols. geometric generalization of the building. the geometric generalization carries out a controlled reduction of the map content based on analysis of the geometric properties of the elements. it tries to remove those elements, that are not significant in the map context. some geometric structures like voronoi tessellation or delaunay triangulation can be used. automated or semi automated generalization of the building represents a problem solved in many ways. commonly used simplifying algorithms can not be applied, they do not maintain internal angles of the polygon edges (±π 2 ) representing the building, see fig. 1. building simplification has the constrain that makes this process more difficult. figure 1: building generalization without internal angles maintaining. requirements for the algorithm. a design of the algorithm with the reasonable time complexity (quadratic or better) providing appropriate cartographic results with minimizing the needs of manual corrections seems to be a hard problem. in addition, we have the following requirements for the simplification algorithm: � ability of building detection and simplification in any position, � self intersections removing, � ability to keep the area (equal area or near o equal area algorithm), � regulation of the simplification factor by user, � ability to simplify complex and non-convex shapes. in terms of computational geometry we explain more detailed the first and second points. scheme of the simplification process an automated or semi automated simplification of buildings based on the least square method is currently being solved in many ways. from the cartographic perspective it provides relatively good results. the simplification process can be shortly described using the following geinformatics fce ctu 2008 17 the importance of computational geometry for digital cartography steps: 1. detection of the angle of rotation ϕ of the building: � construction of the convex hull of a set of points, � construction of the smallest area enclosing rectangle of a set of points. 2. set rotation of the building: the angle of rotation −ϕ. 3. detection of the vertices and edges of the building based on the recursion: � calculation of the splitting criterion σ, � recursive decomposition of the edge to the set of new edges. 4. set rotation of the building: angle of rotation ϕ. in order to simplify mathematical calculations, generalized building is rotated by the angle of −ϕ. the building is rotated so that its edges are parallel to the axes of x, y. detection of the building rotation we will consider a non convex rectangular polygon in the plane to be a building. the building usually does not have to be oriented in the basic position, when all edges are parallel to axis x,y of the coordinate system. in general position the building is rotated, the rotation angle ϕ must be detected as a first step of the simplification algorithm. 
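Steps 2 and 4 of the scheme above are a plain rotation of the building vertices by −ϕ and back by ϕ. A minimal sketch of such a rotation around the origin is shown below; it is an illustration only, not the author's implementation.

#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Rotate all vertices of a polygon by the angle phi (counter-clockwise, in
// radians) around the origin. Rotating by -phi puts a building whose detected
// orientation is phi into the axis-parallel position required by step 3;
// rotating the simplified result by +phi (step 4) moves it back.
std::vector<Pt> rotatePolygon(const std::vector<Pt>& poly, double phi)
{
    const double c = std::cos(phi), s = std::sin(phi);
    std::vector<Pt> out;
    out.reserve(poly.size());
    for (const Pt& p : poly)
        out.push_back({c * p.x - s * p.y, s * p.x + c * p.y});
    return out;
}

int main()
{
    const double phi = 30.0 * std::acos(-1.0) / 180.0;   // detected rotation
    std::vector<Pt> building = {{1, 0}, {3, 0}, {3, 1}, {1, 1}};
    // rotate into general position and back again (steps 2 and 4)
    std::vector<Pt> aligned = rotatePolygon(rotatePolygon(building, phi), -phi);
    std::printf("%.3f %.3f\n", aligned[0].x, aligned[0].y);   // back to 1 0
    return 0;
}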
an accuracy of determining the angle of rotation ϕ significantly affects an effectiveness of the algorithm. the most common method of detecting the angle of rotation ϕ formed by x axis and the longer edge of the rectangle, is based on construction of the minimum bounding box (rectangle enclosing all points with the minimum area), and follows with the detection of the angle formed by the x axis and the longer edge of the rectangle, see fig. 2. whereas, the calculation is carried out over a large set of points, it is necessary to choose the procedure with the lowest time complexity. the procedure runs over non-convex polygon, this feature makes the process more complex. commonly available algorithms achieve quadratic time complexity o(n2) for this operation. using rotating calipers method published in [2] we can perform this step in linear time. this procedure is usable only for convex polygons, first step represents transformation of the non-convex polygon to convex hull. which method for the convex hull construction is the best to choose: jarvis scan, graham scan or quickhull? given the time complexity requirements as the best variant appears the graham scan. an interesting fact may be a comparison of the detected angle ϕ to street line angle constructed using the topological skeleton (eg straight skeleton). this technique is currently at the research stage. graham scan graham scan enables constructioning the convex hull in sub quadratic time with o(n · lg n) complexity. it assumes, there are no three collinear points in the set. algorithm is based on the idea of right turn. for each triplet pi,pi+1,pi+2, i ∈ 1, ..,n − 2, we analyze relative geinformatics fce ctu 2008 18 the importance of computational geometry for digital cartography figure 2: a detection of the building rotation using convex hull and the smallest area enclosing rectangle. position of pi+2 and the segment consisting of pi,pi+1 (left or right turn). let us denote −→u = pi −pi+1 and −→v = pi+1 −pi+2. right turn criterion we can write as∣∣∣∣ ux uyvx vy ∣∣∣∣v 5 0. the first step consists of finding a point q with extreme x coordinate (xmax). it follows with sorting of points according to the angle ω measured between ‖ x and q,p. when calculating the angle, it is necessary to determine ω at interval (0, 2π). notify, that computing angle ω from ω = arccos( (x2−x1)(x3−x2)+(y2−y1)(y3−y2)√ (x2−x1)2+(y2−y1)2 √ (x3−x2)2+(y3−y2)2 ) brings numerical troubles. sorting algorithm. the relationship computational geometry and informatics can be illustrated by a sorting algorithm. what algorithm seems to be appropriate for sorting the set of points because of the time complexity? given the fact, that the set of points forming a building is not too large, the choice of sorting algorithm does not play an important role. given the fact that sorting procedure could be repeated for the data made of thousands of buildings, it is efficient, in terms of overall approach to the problem, to use quicksort. the quicksort implementation is available in many programming languages as a standard sorting procedure. data structure and implementation. a concept of the data representation also plays an important role. one possible solution using the stack can be found in [3]. every point is represented by its unique identifying number, coordinates x,y, and flag illustrates the deletion of point from the hull. a correct definition of copy constructors and casting operators is important. 
look to the following source code sample: geinformatics fce ctu 2008 19 the importance of computational geometry for digital cartography class point { private: int num; bool del; double x,y; ... public: point::point (const point &point) { num=point.num; del=point.del; x=point.x; y=point.y; } bool point::operator < (const point &point) { return (ypoint.x)&&(y==point.y); }; bool point::operator == (const point &point) { return (x==point.x)&&(y==point.y); }; point point:: operator = (const point &point) { num=point.num; del=point.del; x=point.x; y=point.y; return *this; } ... } collinerity problem. the collinearity problem negatively affects the process of convex hull construction. collinear points have the same angle, how to sort those points? let us denote two colinear points pi,pj and si,sj euclidean distances from those points to the q. we define a new sorting rule: if (ωi = ωj) than closer point min(si,sj) is considered as earlier. coincident points represent a special case of the collinearity problem. for gis data this problem is not so important, they are topologically valid (it means also without duplicated points). smallest area enclosing rectangle problem of the smallest area enclosing rectangle construction was solved in many ways. presented solution described in [5] solves the problem in linear time using two calipers orthogonal to each other. the following procedure is called rotating calipers. the idea of construction is based on the repeated rotation of rectangle, the rectangle is gradually improved and becomes an approximation of smallest enclosing area in the next step. one edge of the smallest enclosing box must be collinear with one segment of the convex hull. let us denote ϕj,j ∈ 〈1, 4〉, four angles formed by the four smallest area enclosing box edges and four edges of the convex hull in points of contact vj. let v ′j represents a point, that is a successor of the point vj, and mj represents a vertex of the smallest area enclosing box. vertices of the smallest area (and thus edges) are clockwise oriented. geinformatics fce ctu 2008 20 the importance of computational geometry for digital cartography we find the minimum angle ϕmin = min(ϕj) and rotate the rectangle by an angle ϕmin. another edge of the rectangle becomes collinear with some segment of the convex hull. three points of contacts will not change. however one point vj, represented by the start point of the collinear segments, changes to its successor v ′j . we calculate an area s of the rectangle, compare it with a minimum area smin initialized during the first step to ∞. if s < smin, we store smin = s. repeat those steps until ∑ ϕmin < π 2 leads to result ∑ ϕmin = ϕ. due to the fact, that buildings are represented by rectangular polygons, more than one edge of rectangle with more segments of the convex hull. because of errors cumulation the numerical inaccuracy is the problem of presented algorithm. figure 3: problem of smallest area enclosing rectangle construction lead to inappropriate simplification. for our purposes it is sufficient to determine the angle ϕ with the accuracy of one degree, therefore we do not have to deal with this problem in more detail. for some specific shapes the smallest area enclosing rectangle do not have to be the best way how to detect the rotation of the building. this situation is typical for z or l segment, when the deviation between calculated angle ϕ and true value of the angle may be of up to several tens of degrees, see fig. 3. 
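The whole detection of ϕ can be put together from the pieces discussed above: a convex hull construction and the smallest area enclosing rectangle. The sketch below is an illustration only, not the author's implementation. It builds the hull with Andrew's monotone chain (a Graham-scan relative that sorts by coordinates instead of by angle, which sidesteps the arccos troubles mentioned above) and, instead of the linear-time rotating calipers, simply evaluates the rectangle aligned with every hull edge; that is O(h²) in the number of hull vertices but gives the same result for building-sized inputs.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// z component of the cross product (p1->p2) x (p1->p3); its sign is the
// left/right turn criterion used in the Graham scan discussion above.
static double cross(const Pt& p1, const Pt& p2, const Pt& p3)
{
    return (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
}

// Convex hull by Andrew's monotone chain, counter-clockwise, O(n log n).
std::vector<Pt> convexHull(std::vector<Pt> pts)
{
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::vector<Pt> h(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); ++i) {                 // lower hull
        while (k >= 2 && cross(h[k - 2], h[k - 1], pts[i]) <= 0) --k;
        h[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i-- > 0; ) {   // upper hull
        while (k >= t && cross(h[k - 2], h[k - 1], pts[i]) <= 0) --k;
        h[k++] = pts[i];
    }
    h.resize(k > 1 ? k - 1 : k);
    return h;
}

// Angle phi of the longer side of the smallest area enclosing rectangle.
// One side of that rectangle is collinear with some hull edge, so every
// hull edge direction is tried and the rectangle area is evaluated.
double buildingRotationAngle(const std::vector<Pt>& building)
{
    std::vector<Pt> hull = convexHull(building);
    double bestArea = 1e300, bestPhi = 0.0;
    for (size_t i = 0; i < hull.size(); ++i) {
        const Pt& a = hull[i];
        const Pt& b = hull[(i + 1) % hull.size()];
        double phi = std::atan2(b.y - a.y, b.x - a.x);
        double c = std::cos(phi), s = std::sin(phi);
        double minU = 1e300, maxU = -1e300, minV = 1e300, maxV = -1e300;
        for (const Pt& p : hull) {            // extent in the rotated frame
            double u =  c * p.x + s * p.y;
            double v = -s * p.x + c * p.y;
            minU = std::min(minU, u); maxU = std::max(maxU, u);
            minV = std::min(minV, v); maxV = std::max(maxV, v);
        }
        double w = maxU - minU, hgt = maxV - minV;
        if (w * hgt < bestArea) {
            bestArea = w * hgt;
            bestPhi = (w >= hgt) ? phi : phi + std::acos(-1.0) / 2.0;  // longer edge
        }
    }
    return bestPhi;
}

int main()
{
    // a 4 x 2 rectangle rotated by 30 degrees
    const double phi0 = 30.0 * std::acos(-1.0) / 180.0;
    std::vector<Pt> corners = {{0, 0}, {4, 0}, {4, 2}, {0, 2}};
    std::vector<Pt> building;
    for (const Pt& p : corners)
        building.push_back({std::cos(phi0) * p.x - std::sin(phi0) * p.y,
                            std::sin(phi0) * p.x + std::cos(phi0) * p.y});
    double phi = buildingRotationAngle(building);
    double deg = std::fmod(phi * 180.0 / std::acos(-1.0) + 360.0, 180.0);
    std::printf("detected rotation: %.1f degrees\n", deg);   // 30.0
    return 0;
}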
it is important to note, that the steps above represent only auxiliary geometric construction with a certain percentage of errors. the detection of self intersections during the process of the cartographic generalization we can be encountered with the problem of self intersections. they represent such situations, in which some undesirable forms as a results of the generalization process have been created. due to the topological incorrectness of such data, this error is very dangerous. closed “pseudoregion” is the result of crossing of two or more line segments. in the locus of the intersection there is no vertex inserted. using gis software this pseudoregion will be considered as topologically incorrect, see fig. 4. one of the possible solution may be a test, which verifies an existence of self intersections. before an edge removing or edge splitting procedure it is verified, whether this edge does not intersect any other edge of the building. if so, a procedure for the edge simplification will be geinformatics fce ctu 2008 21 the importance of computational geometry for digital cartography canceled. unfortunately, this step will contribute to a significant slowdown of the algorithm. how to perform the effective detection of self intersections with better than quadratic time complexity? bentley&ottman algorithm brings one of possible solutions. figure 4: problem of self intersection after the splitting procedure. bentley&ottman algorithm bentley&ottman algorithm, published in 1979, is able to find intersections of sets of lines with o(n lg n) time complexity. a brute force algorithm, based on checking of all possible intersection, is working only with the quadratic time complexity. bentley&ottman algorithm represents an application of the sweep line, moving over the lines from left to right. the sweep line parallel to y axis divides the set into processed part and unprocessed part. it calculates intersections only with those lines, that are cut by the sweep line. data structures. the proposed algorithm is an example of the use of the priority queue. the proposal of data structures plays an important role. the first data structure is represented by the priority queue, points are sorted according to x coordinate. information whether this point is a start point, an end point or an intersection, are stored for each point. if sweep line moves to point, an event is called. the second data structure, often represented by the tree, stores lines in the order in which they intersect the sweep line. lines intersection. finding the intersection of two lines is possible from parametric equations. using general equation for lines parallel to x bring problems. let us denote the first line l1 given by two points p1 = [x1,y1], p2 = [x2,y2], the second line l2 given by two pints p3 = [x3,y3], p4 = [x4,y4], and intersection q = [xq,yq]. parametric equation for the line we can write( xq yq ) = ( x1 y1 ) + s ( x2 −x1 y2 −y1 ) = ( x3 y3 ) + t ( x4 −x3 y4 −y3 ) , where s = y1(x3−x4)+y3(x4−x1)+y4(x1−x3) (x2−x1)(y3−y4)−(y2−y1)(x3−x4) , t = y1(x3−x2)+y2(x1−x3)+y3(x2−x1) (x2−x1)(y3−y4)−(y2−y1)(x3−x4) . for s ∈ (0, 1) ∩ t ∈ (0, 1) the intersection could be found from previous formulas. intersection of segments. the sweep line moves over the segments and stops at events of three types: (1) start point of the segment, (2) intersection point between two segments, (3) end point of the segment, see fig. 5. if the event point represents start point, we test segment against two neighbors along the sweep line. 
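Written out as code, the parametric intersection of two segments looks as follows; s and t are computed exactly from the formulas above and the segments are reported as intersecting when both parameters fall into the open interval (0, 1). This is only an illustration of the formulas, not the implementation used in the simplification algorithm.

#include <cstdio>

struct Point2 { double x, y; };

// Intersection of segments p1-p2 and p3-p4 from the parametric equations
// P = p1 + s*(p2 - p1) = p3 + t*(p4 - p3); the segments intersect in their
// interiors when both parameters lie in the open interval (0, 1).
bool segmentIntersection(Point2 p1, Point2 p2, Point2 p3, Point2 p4, Point2& q)
{
    const double d = (p2.x - p1.x) * (p3.y - p4.y) - (p2.y - p1.y) * (p3.x - p4.x);
    if (d > -1e-12 && d < 1e-12)      // parallel or degenerate segments
        return false;
    const double s = (p1.y * (p3.x - p4.x) + p3.y * (p4.x - p1.x) + p4.y * (p1.x - p3.x)) / d;
    const double t = (p1.y * (p3.x - p2.x) + p2.y * (p1.x - p3.x) + p3.y * (p2.x - p1.x)) / d;
    if (s <= 0.0 || s >= 1.0 || t <= 0.0 || t >= 1.0)
        return false;
    q.x = p1.x + s * (p2.x - p1.x);
    q.y = p1.y + s * (p2.y - p1.y);
    return true;
}

int main()
{
    Point2 q;
    if (segmentIntersection({0, 0}, {2, 2}, {0, 2}, {2, 0}, q))
        std::printf("intersection at (%g, %g)\n", q.x, q.y);   // (1, 1)
    return 0;
}

In the Bentley–Ottmann sweep discussed here, this test is applied only to segments that are neighbours along the sweep line.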
if the event point represents end point, point is removed from the list. if we found an intersection of those segments, it becomes a new event point. if the event point represents an intersection of two lines, we change their geinformatics fce ctu 2008 22 the importance of computational geometry for digital cartography order. each of both segments has adjacent segments along the sweep line, that must be tested for intersections. if the point represents an end point, adjacent segments are tested for intersection and event point is removed. bentley&ottman algorithm is based on assumption, that no segment is parallel to sweep line and no three segments pass through one point. figure 5: bentley&ottman algorithm with positions of sweep line. history of segments intersecting the sweep line is stored in balanced binary tree. this data structure is very efficient and enables update operations in o(lg(n)) time. so, it is apparent, that the implementation of bentley&ottman algorithm looks quite difficult, and uses a combination of several dynamic data structures. conclusion this paper presents the use of computational geometry in digital cartography. as an illustrative example the process of automated or semi automated building simplification was chosen, several examples were given and discussed. it was focused on the idea of possibility of more intensive computational geometry teaching. this article tries to find unifying perspective emphasis importance of the computational geometry for cartography education. not to become only passive consumers of information technologies, it is necessary to invest in the development of own geoinformatic solutions. this fact plays and important role and can not be, as mentioned above, underestimated in the long term perspective. references 1. de berg m., schwarzkopf o., kreveld m., overmars m.: computational geometry: algorithms and applications, 2000, springer-verlag. 2. dutter m.: generalization of buildings derived from high resolution remote sensing data, 2007. 3. rourke o. j.: computational geometry in c, 2005, cambridge university press. 4. sester m.: generalization based on least square adjustment, international archieves of photogrammetry and remote sensing, 2000. geinformatics fce ctu 2008 23 the importance of computational geometry for digital cartography 5. toussand g., solving geometric problems with the rotating calipers, mcgill university montreal, 1983 geinformatics fce ctu 2008 24 gal framework – current state of the project radek bartoň, martin hrubý faculty of information technology brno university of technology e-mail: xbarto33@stud.fit.vutbr.cz, hrubym@fit.vutbr.cz keywords: design, gis, grass, open source, library, dynamic language, remote procedure call abstract the gal (gis abstraction layer) framework is a component-architecture-oriented1 remote procedure call (rpc) library with implementations of gis-related subsystems communicating using the library and a set of demonstrational and testing tools utilizing that services. it doesn’t aim to be a full-featured solution for gis application construction but a proposal for possible incremental grass gis2 modernization. this article summarizes current state of the project, it’s history, application and potential and also presents options for further advancement and areas of possible participation. only a concern of other developers or users and the time may transform this idea into something practically usable. 
history and motivation the project was originated as an article author’s master degree diploma thesis at the faculty of information technology of the brno university of technology in february 2007. it was intended to be a higher-level abstraction layer above grass gis core libraries from the beginning allowing rapid and clear grass module development. it also allows sequential exchange of the current implementations with the new ones if used communication interfaces 1 http://trac.edgewall.org/wiki/tracdev/componentarchitecture 2 http://grass.osgeo.org/ geinformatics fce ctu 2008 5 http://trac.edgewall.org/wiki/tracdev/componentarchitecture http://grass.osgeo.org/ gal framework – current state of the project would be well-designed and preserved. this could help during possible grass gis innovation procedure. support of distributed computing and dynamic language facilitation was contemplated too. an initial stage of project realization was to design core communication mechanisms and lasted until july 2007 when the first steps to implement them was started. the library design was introduced on the last year’s volume3 of geoinformatics fce ctu workshop. further information about project creation motivation in consequence to grass’s internal organization was discussed there also. main development of the framework, including the design of introductory general-purpose, raster display and raster processing interfaces, was performed during the first half of year 2008 until the end of may when the project was presented in front of a diploma thesis commission. but the development did not stop since then and it may continue further if there will be enough of interest. current state the library is divided into several subsystems which are developed in parallel to allow implementation of certain features of example tools. these are mainly but not lastly a reimplementation of d.mon module functionality and a real-time 3d visualization tool called d.roamer similar to the nviz but with emphasis on interactivity. this paragraph will tell a few words about progress of each of the subsystems; designed interfaces and implemented modules are discussed in next paragraphs. generally can be said about these subsystems that grass’s libraries has been used in their implementation everywhere it was feasible but a possibility of their replacement with different implementations has always been kept in mind. core subsystem this part of library defines basic ways of communication between the components through the interfaces, abstracts used event processing libraries to a single event loop and provides a general model for rpc based subsystems such as a d-bus4 subsystem is. what do the ” component“ and the ” interface“ terms mean in context of the gal framework and what is the ” component architecture“ was explained the last year5 or can be found in this document6. the core subsystem is naturally the most evolved part of the framework. only things that should be done here are a proper event processing loop implementation since current one is quite naive and a user (module programmer) comfortance improvements which are not crucial in this stage of evolvement. 
exception subsystem 3 http://geoinformatics.fsv.cvut.cz/gwiki/gal framework 4 http://www.freedesktop.org/wiki/software/dbus 5 http://geoinformatics.fsv.cvut.cz/gwiki/gal framework 6 http://trac.edgewall.org/wiki/tracdev/componentarchitecture geinformatics fce ctu 2008 6 http://geoinformatics.fsv.cvut.cz/gwiki/gal_framework http://www.freedesktop.org/wiki/software/dbus http://geoinformatics.fsv.cvut.cz/gwiki/gal_framework http://trac.edgewall.org/wiki/tracdev/componentarchitecture gal framework – current state of the project it contains an exception objects’ class hierarchy so far. the exceptions are generally used as the only one mechanism for an error state signalization occured during the communication between the components. a local exception evocation and processing is provided natively by gcc but an exception passage through d-bus message bus is not working yet. d-bus subsystem the only one rpc communication implementation present is the d-bus subsystem. the d-bus library was chosen because of its simplicity and desktop systems orientation, but it’ll be probably replaced with an orbit2 implementation of a corba architecture in the future for its robustness. current implementation allows only single process act as a server which provides components with interface implementations. this have to be changed so that any number of processes will be accessible to any client module soon. general subsystem together with the core, the exception and the d-bus subsystems, general subsystem can be cut out and reused in any other project needing component architecture implementation, because it contains general purpose objects, interfaces and components. for example a command-line argument parsing and an environment variables management is located here. the subsystem is quite solid, only a module arguments documentation strings access has to be improved. this however doesn’t mean that it doesn’t need other extensions. if there will occur any new requirements for general functionality, their concretization may be inserted here. gis subsystem this subsystem should include all instruments to gis related computations. currently it has only information about active user and default region and their control. possible algorithms for a map projection or general gis data transformation are waiting for their introduction. raster subsystem it comprehends everything about raster data access, manipulation and conversion. raster architecture is designed so that data are accessed by tiles. request for tile contains desired dimensions, position and resolution of the tile in a layer region object. colour rules and a colour table for data presentation are associated with the returned tile similarly like in the grass. actual data storage is currently kept in grass’s competence using a grasslib library. a present design of the raster data representation is quite initiatory and and it needs an adequate degree of revision from the outside with proper modifications. hence any comments geinformatics fce ctu 2008 7 gal framework – current state of the project or suggestions would be positive and convenient contribution. if progress of the project allows practical usage of the library along with the grass, new implementations of the raster data storage may be added. some examples of data analysis modules should be implemented too. display subsystem raster data are passed to this part of the framework and displayed. 
a basic element of this process is a raster image object defined by its dimensions, number of channels and bit depth. first present component implementing raster data visualization emulates d.mon’s eight monitors but it uses qt 4.x for window management and opengl for rendering, second is a d.roamer’s module component which displays raster data as 3d scene with terrain. vector data display isn’t currently elaborated. dynamic languages bindings to allow easy development of modules written in scripting or dynamic languages, swig7 wrapper generator was employed. existing bindings are targeted to python and java. unfortunately, technical difficulties with dynamic and heterogeneous nature of the designed communication methodology leaded to many customizations of the wrapper and some limitations. for example a server-side module development in dynamic languages is for now impossible without using d-bus communication. this can be translated as: ” it is not possible to call python/java code from c++ code directly.“ possibility to write client-side modules, which is the main reasonable dynamic languages usage, is though available. designed and implemented interfaces although this article shouldn’t serve as the library reference, some important communication interfaces should be listed and explained here to get image about gal framework approaches. interfaces are actually designed as interface objects which holds an interface configuration state (list of available functions with their signatures, a way of communication, etc.) and which are imported to a module on demand from the gal core. inodecontroller – is basic interface for independent process management from outside. it’s mainly used internally, for example d.quit module calls process termination function of this interface. other functions will serve for communication negotiation. � irasterdisplayer – displays any raster image on a monitor. this can be tiles of raster layer or simply any raster image (legend, icon, etc.). � irasterlayerdisplayer – allows direct display of a raster layer on the monitor. this may help to reduce unnecessary computations for better performance and and lets a monitor handler to record a list of raster layer display requests. � irasterlayerprovider – gives tiled access to gis raster data. current implementation uses grass libraries for low-level data manipulation. 7 http://www.swig.org/ geinformatics fce ctu 2008 8 http://www.swig.org/ gal framework – current state of the project � ienvironmentprovider – provides different storages for global variables. present implementations are volatile memory, grass mapset configuration and grass global configuration storage. example tools a few modules known from the grass gis was developed to test and demonstrate functionality of designed and implemented interfaces. they are described here. g.gald, g.quit, g.list and g.gisenv some modules from a general category was rewritten as tests of the designed interfaces. they are a g.list and a g.gisenv. in addition, a g.gald and a g.quit modules was introduced. figure 1. shows example of their usage. first the g.gald module, which provides all available functionality implementation, is executed as a daemon. then the g.list is used to list raster layers of a mapset and the g.gisenv module displays defined environment variables. finally, the g.quit module terminates the running g.gald module. figure 1: some modules from general category. 
d.mon, d.move, d.resize and d.rast user interface of reimplemented d.mon module is shown on the figure 2. the d.mon module actually only gives order to show a monitor to the waiting g.gald process which performs own monitor window display. it is the same with d.rast module that reads raster data from grass and sends them to g.gald. other controlling modules the d.move and the d.resize tell the g.gald to move or resize the window. geinformatics fce ctu 2008 9 gal framework – current state of the project figure 2: d.mon module in action. d.roamer the last presented module is called a d.roamer and it allows the user to fly over a visualized terrain in real-time. it’s screenshots can be found on figure 3. and 4. the first shows the terrain rendered with full faces, the second uses wireframe. this demonstrates used level of detail algorithm called geo mip-mapping. figure 5. contains diagram of internal communication between d.roamer and d.rast modules using the framework. analogously as with the g.gald, d.mon and d.rast modules in previous paragraph, data are read form grassrasterlayerprovider component and pased through irasterlayerprovider and irasterdisplayer interfaces to d.roamer’s roamercomponent component. areas of future development as you may notice, vector subsystem is not present in the framework at all yet. the explanation is that it was not necessary to focus on so complex area as vector data architecture is for the prove of concept of proposed and designed communication strategy. hopefully, decent vector implementation will be result of bc. jan kittler’s master thesis whom the article author is cooperating with. he should design new internal and external representation of vectors and some analytical tools with user interface. core parts should be implemented in c++, analysis tools and user interface in c#. this will introduce need of c# bindings for gal framework. geinformatics fce ctu 2008 10 gal framework – current state of the project figure 3: d.roamer module interface with full-faced terrain. figure 4: d.roamer module interface with wireframe terrain. geinformatics fce ctu 2008 11 gal framework – current state of the project figure 5: architecture of d.roamer module. because of huge scale of project’s extent, another outside contribution would be more than welcomed. safe multi-thread processing of events in loop including thread-safe access to any internal data of the library may be elaborated. better raster architecture as long as any number of raster or vector data format implementations may be added. and finally, new modules using gal framework may be developed. bachelor or diploma theses on that themes could be published. some statistics � 20 months of development of single person. � 9000 code lines (according to http://www.ohloh.net/projects/9183/analyses/latest). � 6500 comment lines (mainly doxygen documentation). � c++ as main language, python and java bindings. � 41 commits to svn repository (svn://gal-framework.no-ip.org:3691). � depends on d-bus, libxml2, libgcj or libffi, qt 4.x, soterrain8 and grasslib libraries (some optionally). � homepage is trac instance at http://gal-framework.no-ip.org. 8 http://blackhex.no-ip.org/wiki/soterrain geinformatics fce ctu 2008 12 http://www.ohloh.net/projects/9183/analyses/latest http://blackhex.no-ip.org/wiki/soterrain http://gal-framework.no-ip.org gal framework – current state of the project references 1. christopher lenz, dave abrahams and christian boos. 
trac component architecture http://trac.edgewall.org/wiki/tracdev/componentarchitecture, july 2007. 2. radek bartoň and martin hrubý. gal framework. in proceedings of the workshop geoinformatics fce ctu 2007. czech technical university in prague, september 2007. 3. grass development team. grass gis. http://grass.itc.it. 4. freedesktop.org. d-bus. http://www.freedesktop.org/wiki/software/dbus. 5. swig. simplified wrapper and interface generator. http://www.swig.org. 6. radek bartoň. soterrain. http://blackhex.no-ip.org/wiki/soterrain, october 2007. geinformatics fce ctu 2008 13 http://trac.edgewall.org/wiki/tracdev/componentarchitecture http://grass.itc.it http://www.freedesktop.org/wiki/software/dbus http://www.swig.org http://blackhex.no-ip.org/wiki/soterrain geinformatics fce ctu 2008 14 implementation of a general web api for geoinformatics implementation of a general web application program interface for geoinformatics jan pytel department of mapping and cartography faculty of civil engineering, ctu in prague e-mail: pytel@fsv.cvut.cz keywords: java language, c++language, servlets, cgi, web applications abstract c++ language was used for creating web applications at the department of mapping and cartography for many years. plenty of projects started to be very large-scale and complicated to maintain. consequently, the traditional way of adding functionality to a web server which previously has been used (cgi programs) started being usefulness. i was looking for some solutions particularly open source ones. i have tried many languages (solutions) and finally i chose the java language and started writing servlets. using the java language (servlets) has significantly simplified the development of web applications. as a result, developing cycle was cut down. because of java jni (java native interface) it is still possible to use c++ libraries which we are using. the main goal of this article is to share my practical experiences with rewriting typical cgi web application and creating complex geoinformatic web application. introduction the modern era brings new phenomenon: world wide web, a term frequently used (incorrectly) when referring to “the internet”. it stands for the universe of hypertext servers (http servers), more commonly called ”web servers”, which are the servers that serve web pages to web browsers. a plain www page (html document) is static, which means that a text file doesn’t change for example: cv, research papers, etc. when someone would like create web pages that contain dynamic content a plain www pages are not sufficient: a solution is to create cgi programs with using languages like php, c++, perl, etc. php or c++ languages were used for creating web applications at the department of mapping and cartography for many years, some examples are: � internet access to the database of gps observation via www. the project is written in c++ language: http://www.vugtk.cz/gpsdb � online transformation between etrf and s-jtsk http://gama.fsv.cvut.cz/~kost/vypocty . plenty of projects started to be very large-scale and complicated to maintain. consequently, the traditional way of adding functionality to a web server which was used (cgi programs) has become unsustainable. i was looking for alternatives particularly open source ones. 
I have tried many languages (solutions) and finally I chose the Java language and started writing servlets. This paper briefly introduces the servlet concept and explains how to create a general web application program interface for geoinformatics.

Common Gateway Interface (CGI)

Common Gateway Interface (CGI) programs are "normal" programs running on the server side: the input of a CGI program is the request coming from a client (the data sent by the web browser, i.e. the HTTP header and other information), and the output of the CGI program is sent back to the client (web browser). Thanks to this concept the client does not need to care which type of page is requested; dynamic and static WWW pages are transparent from the client side, the client simply sends a request with data and obtains a WWW page. CGI programs have to be placed in a special directory (usually /usr/lib/cgi-bin), the directory where the server expects CGI programs.

An example of a dummy CGI program date (written in the bash scripting language) which returns the current date in the ISO 8601 format 'year-month-day':

#!/bin/bash
echo 'Content-type: text/html'
echo
echo '<html>'
echo '<head>'
echo '<title>Current date</title>'
echo '</head>'
echo '<body>'
echo '<h1>Date:</h1>'
echo '<pre>'
/bin/date -I
echo '</pre>'
echo '</body>'
echo '</html>'

We simply copy the previous program/script into the cgi-bin directory and set the file execution permission. No other steps are necessary; thus developing CGI programs is pretty easy. We can test the developed CGI program in a terminal now:

wget http://localhost/cgi-bin/date -O tmp-date && \
    cat tmp-date && rm tmp-date

<html>
<head>
<title>Current date</title>
</head>
<body>
<h1>Date:</h1>
<pre>
2006-06-25
</pre>
</body>
</html>
Another example is a CGI program written in the C++ language. The CGI program returns its input (the data sent by the client, without the HTTP header). From the following example it is quite obvious that we have written a "normal" C++ program which reads data from the input and returns a text page:

#include <iostream>
#include <string>

int main()
{
    using namespace std;
    string s;

    cout << "Content-type: text/html\n\n";
    cout << "<html>\n";
    cout << "<head>\n";
    cout << "<title>Input data</title>\n";
    cout << "</head>\n";
    cout << "<body>\n";
    cout << "<h1>Input data:</h1>\n";
    cout << "<pre>\n";
    cin >> s;
    cout << s;
    cout << "</pre>\n";
    cout << "</body>\n";
    cout << "</html>\n";
}

CGI programs can be written in many languages, for example PHP, C++, Python, etc. Creating CGI programs is pretty easy, but the CGI concept has some limitations:

- each request is answered in a separate process by a separate instance of the CGI program (the CGI program needs to be loaded and started for each CGI request),
- database connection pooling or interaction between two CGI programs is problematic,
- platform dependence,
- lack of scalability.

Servlets

A servlet is a Java application that runs within a web server. Servlets receive and respond to requests from web clients. We have to use a servlet container in order to run servlets. There are many servlet containers available; we have chosen the Apache Tomcat servlet container, discussed in the next section. On the Sun pages (the Sun company is the creator of servlets) we can read a more precise explanation: "A servlet is a Java programming language class used to extend the capabilities of servers that host applications accessed via a request-response programming model. Although servlets can respond to any type of request, they are commonly used to extend the applications hosted by web servers. For such applications, Java servlet technology defines HTTP-specific servlet classes."

The Java Servlet API [1] is a class library for servlets. It contains the class HttpServlet, which provides methods such as doGet and doPost for handling HTTP services. In other words, when we would like to create a new servlet, we have to create a new class extending the HttpServlet class and override the doGet and doPost methods; refer to the next example. Servlets have several advantages over CGI:

- a servlet does not run in a separate process and stays in memory between requests,
- there is only one single instance which answers all requests concurrently; this saves memory and allows a servlet to easily manage persistent data,
- platform independence,
- the Java language has very rich libraries for working with HTTP requests, HTTP responses, etc.

An example of a first servlet:

package cz.examples;

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWorldServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.println("<html>");
        out.println("<head><title>Hello world.</title></head>");
        out.println("<body>Hello world</body>");
        out.println("</html>");
        out.close();
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        doGet(req, res);
    }
}

When the servlet HelloWorldServlet is requested from a URL inside a web browser, the sentence "Hello world" will appear. Because there is only a single instance which answers all requests concurrently, we can easily manage persistent data. For example, we would like to know how many times the servlet HelloWorldServlet was requested:
package cz.examples;
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWorldRequestedServlet extends HttpServlet {
    private int times = 0;

    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.println("<html>");
        out.println("<head><title>Hello World.</title></head>");
        out.println("<body><h1>Hello World!</h1>");
        out.println("This page was requested " + ++times + " times.</body>");
        out.println("</html>");
        out.close();
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        doGet(req, res);
    }
}

Achieving this behaviour with CGI programs would be quite complicated.

The Java Servlet API contains a rich set of useful classes. One of them is HttpServletRequest, which provides request information to HTTP servlets and offers many useful methods. The next example is the servlet NumericalServlet; it expects two parameters, argument1 and argument2, and returns the following results:

package cz.examples;
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class NumericalServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        String argument1 = req.getParameter("argument1");
        String argument2 = req.getParameter("argument2");
        PrintWriter out = res.getWriter();
        if ((argument1 == null) || (argument2 == null)) {
            out.println("Error: wrong arguments");
            out.close();
            return;
        }
        double arg1 = Double.parseDouble(argument1);
        double arg2 = Double.parseDouble(argument2);
        out.println("<html>");
        out.println("<head><title>NumericalServlet</title></head>");
        out.println("<body>");
        out.println("Results:<br>");
        out.println(arg1 + " + " + arg2 + " = " + (arg1 + arg2) + "<br>");
        out.println(arg1 + " - " + arg2 + " = " + (arg1 - arg2) + "<br>");
        out.println(arg1 + " * " + arg2 + " = " + (arg1 * arg2) + "<br>");
        out.println("</body></html>");
        out.close();
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        doGet(req, res);
    }
}

Java servlet container — Apache Tomcat

When working with servlets, a servlet container is needed to run them. There are many containers available; we have chosen the Apache Tomcat servlet/JSP container. This container is free software and is released under the Apache Software Licence (http://www.apache.org/licences). First of all, the binary distribution (jakarta-tomcat-5.0.28.tar.gz) has to be downloaded from http://tomcat.apache.org/. It is assumed that the Java Development Kit (JDK) 1.2 or a later platform is already installed. Next we decide in which directory Tomcat will be located (this directory represents the root of the Tomcat installation); a common choice is /opt, where the gzipped tarball of the binary distribution is extracted:

$ mv jakarta-tomcat-5.0.28.tar.gz /opt/
$ tar xvzf jakarta-tomcat-5.0.28.tar.gz

Now it is necessary to modify two files:

- /opt/jakarta-tomcat-5.0.28/conf/server.xml — we modify the port attribute of the <Connector> element; this attribute defines the port on which Tomcat will be running.
- /opt/jakarta-tomcat-5.0.28/bin/catalina.sh — we set the JAVA_HOME environment variable to tell Tomcat where to find Java: JAVA_HOME=/opt/jdk1.5.0

From now on Tomcat is ready to run. Before starting Tomcat (with /opt/jakarta-tomcat-5.0.28/bin/startup.sh) we should deploy our applications.

Deploying an application

In order to be executed, a web application must be deployed on a servlet container. A web application contains servlets and is defined as a hierarchy of directories and files in a standard layout. The top-level directory of this hierarchy is also the document root of the web application; plain HTML files are placed there. A web application uses the following directory hierarchy:

- *.html, *.htm — the HTML pages, along with the other files that must be visible to the client browser (stylesheets and images)
- /WEB-INF/web.xml — the web application deployment descriptor: an XML file describing the servlets which make up the application, along with any initialization parameters
- /WEB-INF/classes/ — any Java class files (and associated resources) required by the application, including both servlet and non-servlet classes
- /WEB-INF/lib/ — JAR files containing Java class files required by the application

To continue with our examples (the servlets HelloWorldServlet, HelloWorldRequestedServlet and NumericalServlet) we have to create a web application (named exampleservlets):

- create the directory /opt/jakarta-tomcat-5.0.28/webapps/exampleservlets and, inside it, the hierarchy described above
- compile all three servlets (with javac)
- copy the .class files into the /WEB-INF/classes/ directory (because the examples are in the package cz.examples, copy the .class files into /WEB-INF/classes/cz/examples/)
- modify /WEB-INF/web.xml; as mentioned above, this file contains the web application deployment descriptor for the application (a possible command sequence for these steps is sketched below)
As the filename extension implies, web.xml is an XML document, and it defines everything about the application that a server needs to know. The complete syntax and semantics of the deployment descriptor are defined in [2]. In our case web.xml looks like:

<web-app>
    <display-name>Example WebApplication</display-name>
    <description>This is a simple webapplication for demonstrating.</description>

    <servlet>
        <servlet-name>HelloWorldServlet</servlet-name>
        <servlet-class>cz.examples.HelloWorldServlet</servlet-class>
    </servlet>
    <servlet>
        <servlet-name>HelloWorldRequestedServlet</servlet-name>
        <servlet-class>cz.examples.HelloWorldRequestedServlet</servlet-class>
    </servlet>
    <servlet>
        <servlet-name>NumericalServlet</servlet-name>
        <servlet-class>cz.examples.NumericalServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>HelloWorldServlet</servlet-name>
        <url-pattern>/HelloWorldServlet</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>HelloWorldRequestedServlet</servlet-name>
        <url-pattern>/HelloWorldRequestedServlet</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>NumericalServlet</servlet-name>
        <url-pattern>/NumericalServlet</url-pattern>
    </servlet-mapping>
</web-app>

Now everything needed to run the web application is in place. After starting Tomcat we can use a web browser and test our examples. The servlets can be found at the following addresses (port is the port on which Tomcat is running):

http://localhost:port/exampleservlets/NumericalServlet
http://localhost:port/exampleservlets/HelloWorldServlet
http://localhost:port/exampleservlets/HelloWorldRequestedServlet

General web application program interface for geoinformatics

Requirements for the technology used

During the development of the general web application program interface for geoinformatics the following technology features were required:

- development under a GNU/Linux OS using free software [3]
- platform independence
- the ability to make effective use of existing source code (most of it in C++) from already finished projects
- use of the MVC paradigm
- database access and database connection pooling
- use of sessions and session management

The Java language and servlet technology were chosen for the development of the project; servlet technology satisfies all of our requirements. This chapter describes the technologies used and shares our experience with creating the general web application program interface for geoinformatics (an application built with servlet technology and the MVC paradigm).

Design patterns used

The interface is fully object-oriented and uses several design patterns (a design pattern is a general, repeatable solution to a commonly occurring problem in software design); for example, we used the Singleton, AbstractFactory, Observer and Facade patterns. One of the main goals was the selection of an object-oriented model which would separate the computing core from the presentation part (the way the results are displayed to the user on the screen). The programmer of the computing core usually does not care about the layout of input and output data; he or she usually only describes the expected input data together with a description of the results returned by the program. The MVC software design pattern turned out to be the ideal solution, and it is the design pattern on which this system is based. (A small, purely illustrative sketch of one of the patterns mentioned above follows.)
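The paper does not show how these patterns are applied in the interface, so the following is only a hypothetical sketch of the simplest of them, the Singleton; the class name, the property key and the value are invented for this illustration and do not come from the project code:

package cz.examples;

import java.util.Properties;

// Hypothetical illustration of the Singleton pattern: one shared configuration object.
public final class InterfaceConfiguration {

    // the single, eagerly created instance
    private static final InterfaceConfiguration INSTANCE = new InterfaceConfiguration();

    private final Properties properties = new Properties();

    // the private constructor prevents creation of further instances
    private InterfaceConfiguration() {
        properties.setProperty("templateDirectory", "/opt/webapp/templates");
    }

    public static InterfaceConfiguration getInstance() {
        return INSTANCE;
    }

    public String get(String key) {
        return properties.getProperty(key);
    }
}

Any part of the application can then call InterfaceConfiguration.getInstance() and is guaranteed to work with the same configuration object.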
The MVC paradigm is a way of breaking an application into three parts [4]:

Model – represents the information on which the application operates (e.g. the description of the database system, the computing models of geodetic tasks, ...).

View – renders the model into a form suitable for users, typically a user interface element. MVC is often seen in web applications, where the view is the HTML page together with the code which gathers the dynamic data for that page.

Controller – responds to events, typically user actions, and invokes changes on the model and perhaps the view.

Template engine Velocity

The selection of a template engine for the View component was essential for the creation of the application. The key requirement on the template engine was the complete separation of the model and view from the presentation part: the developer only describes the required input and output data (i.e. specifies the names of the variables representing the input and result data, such as collections, strings, etc.). The developers chose the template engine Velocity (http://jakarta.apache.org/velocity). Velocity is one part of the Jakarta project developed by the Apache Software Foundation and is released under the Apache Licence. The project web pages state the following:

"The Velocity User Guide is intended to help page designers and content providers get acquainted with Velocity and the syntax of its simple yet powerful scripting language, the Velocity Template Language (VTL). Many of the examples in this guide deal with using Velocity to embed dynamic content in web sites, but all VTL examples are equally applicable to other pages and templates. Velocity is a Java-based template engine. It permits web page designers to reference methods defined in Java code. Web designers can work in parallel with Java programmers to develop web sites according to the Model-View-Controller (MVC) model, meaning that web page designers can focus solely on creating a well-designed site, and programmers can focus solely on writing top-notch code."

It is necessary to stick to the following pattern when using Velocity (a minimal Java sketch of this pattern follows the template excerpts below):

- initialization
- creation of a context object
- filling the Velocity context with data
- selection of a template
- merging the context and the template into the output file

The Velocity context represents the variable part of the page; from the programmer's standpoint, it is a map of objects. The Velocity Template Language (VTL) is very simple; for its description refer to http://jakarta.apache.org/velocity/docs/user-guide.html.

The following text contains short extracts from the template language:

#set( $foo = "Velocity" )
Hello $foo World!

#if( $value < 0 )
  value $value is negative
#elseif( $value == 0 )
  value $value is equal 0
#else
  value $value is positive
#end

#foreach( $student in $university ) $student.nickname $student.surname $student.birthday
#end

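As an illustration of the five-step pattern listed above (the template file name hello.vm and the variable names are placeholders, not taken from the project), a minimal stand-alone use of Velocity might look like this:

package cz.examples;

import java.io.StringWriter;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.Velocity;

public class VelocityExample {
    public static void main(String[] args) throws Exception {
        // 1. initialization
        Velocity.init();

        // 2. + 3. create the context object and fill it with data
        VelocityContext context = new VelocityContext();
        context.put("foo", "Velocity");

        // 4. selection of a template (hello.vm could contain: Hello $foo World!)
        Template template = Velocity.getTemplate("hello.vm");

        // 5. merge the context and the template into the output
        StringWriter writer = new StringWriter();
        template.merge(context, writer);
        System.out.println(writer.toString());
    }
}

Inside the web interface the same merge step would write to the servlet's PrintWriter instead of a StringWriter, which is exactly how the View stays independent of the computing core.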
computing task — from developer’s standpoint general web application program interface for geoinformatics allows adding arbitrary computing tasks. those tasks are accessible for the users using well-arranged menu. currently, it is possible to add the following three types of tasks into the system: � tasks written in java language, implementing interface computingtask � tasks written in arbitrary programming language, distributed as standalone executable programs – the system allows execution of those programs � tasks written in c++ language. the task written in java language is only class implementing interface computingtask: public interface computingtask { void setparameters(map input); map getresults(); outputstream getresultstream(); void compute(); boolean wascomputed(); } input data for the computing task are stored in map collection. the key representing name of the variable input has string type. as a result, it is necessary explicitly retype data to appropriate data types. the results may be returned in collection map, or in the class outputstream. the similar approach used in java language is applied also for the tasks written in c++ this case, jni code is used. final application is called manala and can be found on the following geinformatics fce ctu 2006 54 implementation of a general web api for geoinformatics web page http://gama.fsv.cvut.cz/manala. conclusion the java language (servlets) has significantly simplified the development of web applications. as a result, developing cycle was cut down. because of java jni (java native interface) it is still possible to use c++ libraries which are done. we have started using the java language particularly for web applications in 2005. we have rewritten many of our applications. switch development from ”typical cgi programming” to ”java servlet programming” is surprisingly easy with amazing benefit. development of general web application program interface for geoinformatics has brought completely new requirements on the web application program development. the current technology for development www applications used by the department of mapping and cartography turned to be insufficient. the development has been significantly improved by using servlet technology and framework mvc. as a result we developed application program interface for geoinformatics manala. references 1. java servlet api, http://java.sun.com/products/servlet/index.html 2. http://tomcat.apache.org/tomcat-5.5-doc/appdev/deployment.html 3. http://www.gnu.org 4. http://en.wikipedia.org/wiki/model_view_controller 5. common gateway interface, http://hoohoo.ncsa.uiuc.edu/cgi/intro.html 6. apache tomcat tutorial, http://www.coreservlets.com/apache-tomcat-tutorial 7. servlet essentias, http://www.novocode.com/doc/servlet-essentials/ 8. apache tomcat servlet container, http://jakarta.apache.org geinformatics fce ctu 2006 55 http://gama.fsv.cvut.cz/manala http://java.sun.com/products/servlet/index.html http://tomcat.apache.org/tomcat-5.5-doc/appdev/deployment.html http://www.gnu.org http://en.wikipedia.org/wiki/model_view_controller http://hoohoo.ncsa.uiuc.edu/cgi/intro.html http://www.coreservlets.com/apache-tomcat-tutorial http://www.novocode.com/doc/servlet-essentials/ http://jakarta.apache.org ________________________________________________________________________________ geoinformatics ctu fce 370 automated 3d-objectdocumentation on the base of an image set dipl.-inf. (fh) sebastian vetter1, dipl.-ing. gunnar siedler1 1fokus gmbh leipzig lauchstädter str. 
20, 04229 leipzig, germany home@fokus-gmbh-leipzig.de keywords: 3d-objectdocumentation, textured surface model, orthophotos, image matching, point cloud abstract: digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimension and the surface texture of objects. the integration of image analysis techniques simplifies the automation of evaluation of large image sets and offers a high accuracy [1] . due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. with the help of an automated point search algorithm in image sets identical points are used to associate pairs of images to stereo models and group them. the found identical points in all images are basis for calculation of the relative orientation of each stereo model as well as defining the relation of neighboured stereo models. by using proper filter strategies incorrect points are removed and the relative orientation of the stereo model can be made automatically. with the help of 3d-reference points or distances at the object or a defined distance of camera basis the stereo model is orientated absolute. an adapted expansionand matching algorithm offers the possibility to scan the object surface automatically. the result is a three dimensional point cloud; the scan resolution depends on image quality. with the integration of the iterative closest pointalgorithm (icp) these partial point clouds are fitted to a total point cloud. in this way, 3d-reference points are not necessary. with the help of the implemented triangulation algorithm a digital surface models (dsm) can be created. the texturing can be made automatically by the usage of the images that were used for scanning the object surface. it is possible to texture the surface model directly or to generate orthophotos automatically. by using of calibrated digital slr cameras with full frame sensor a high accuracy can be reached. a big advantage is the possibility to control the accuracy and quality of the 3d-objectdocumentation with the resolution of the images. the procedure described here is implemented in software metigo 3d. 1. image recording two digital reflex cameras canon eos 5d mark ii were used for 3d-evaluation on the base of stereo models in an image set at the shown object (castle of katzenstein). these were previously calibrated on two focal lengths (24mm und 50mm); (alternatively usable camera systems are nikon d700 or sony alpha 950 as digital reflex camera with full frame sensor). using a receiving rail (on a tripod), where both cameras have been attached, a set of stereo model was taken. additional three dimensional reference points at the object have been measured by tacheometer. the accuracy of evaluation can be influenced by image quality and image scale. another recording configuration for plastic objects can be the usage of a rotation plate and the recording of single images in suitable step size. figure 1: left: recording configuration at east wall and apse (castle of katzenstein); right: used recording rail with 2 reflex cameras ________________________________________________________________________________ geoinformatics ctu fce 371 2. automated model assignment, automated model orientation after creating a project in the software the evaluation accuracy and resolution are defined, and the images are loaded into the project. 
the inner orientation is established for every image by linking the images to the corresponding camera. control points are automatically detected by the evaluation software in the images (identical points at the object). on the base of these control points the arrangement of the images is analyzed, with the goal to detect suitable pairs of images or stereo models (parallel recording direction, a large region of overlap). the manual creation of stereo models is possible as an alternative. figure 2: left: six images with similar image part and automatically found control points; right: result of analyzing for one pair of images (shift vectors of control points) the calculation of relative orientation of both images of the stereo model is made on the base of the known control points. if there are not enough control points for model orientation, additional image coordinates could be detected automatically or measured by hand. by using proper filter strategies during calculation of relative orientation of the stereo model incorrect model coordinates are detected and removed. by additional measurement of reference points the absolute orientation of one stereo model into overall coordinate system is made. the absolute orientation can be made with the defined distance of camera basis alternatively. a relation graph, that describes the arrangement of the stereo models, is generated with the help of the existing control points. therefore, it is sufficient to make absolute orientation of one stereo model (of one set of related stereo models) by hand. all the other stereo models of the set are automatically orientated absolute. in the absence or insufficient relation of the stereo models, additional reference points on the object surface can be measured and the absolute orientation can be calculated manually. figure 3: absolute orientated stereo model (green image coordinate: used control point; blue image coordinate: filtered control point; magenta image coordinate: control point was found in only one image; yellow image coordinate: not measured reference point) ________________________________________________________________________________ geoinformatics ctu fce 372 3. automated generation of point cloud due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. in addition to the single-point measurement object surfaces can be scanned with appropriate expansion algorithms [2]. with consideration of the evaluation accuracy for every stereo model the right step size (point distance) for matching is determined in dependence of the images scale. with batch processing all existing stereo models can be “scanned”. for every matched point the error is calculated by photogrammetric spatial intersection and the point cloud is coloured according to these errors. incorrect points are filtered automatically by an adjustable treshold. figure 4: three partial point clouds in 3d window, coloring in dependence of point error 4. merge of partial point clouds to a total point cloud with the help of the integrated iterative closest point algorithm the partial point clouds are transformed by the identical control points and merged to a total point cloud. by usage of filter strategies the point cloud is thinned out to the evaluation accuracy in the overlapping areas, due to merge, these have a higher point density [3]. 
figure 5: merged point cloud; right: point colouring in dependence of origin (partial point cloud) ________________________________________________________________________________ geoinformatics ctu fce 373 5. automated evaluation of digital surface model with a triangulation algorithm [4] a digital surface model is generated by a point cloud. in a second step, after editing the surface model, the images are mapped on it. thus a three-dimensional digital documentation is possible. 6. unwinding / digital ortho projection for the projection of images onto a plane or another unwinding geometry, user coordinate systems can be defined related to overall coordinate system or with the help of a partial set of points (balancing plane). additional sectional profiles can be extracted and generalized from the existing point cloud or surface model. figure 6: 3d-surface model with image texture figure 7: left: 3d point cloud and plane of a user coordinate system (for ortho projection); right: image plane of east wall (ortho projection to plane) ________________________________________________________________________________ geoinformatics ctu fce 374 the ortho projection onto the unwinding geometry is made in a user defined image scale and image resolution on the base of the orientated images, the sectional profiles and the surface model. figure 8: left: orientated image of calotte with projection of a barrel; right: unwound image plan of calotte (ortho projection onto barrel) figure 9: unwound image plan of apse (ortho projection onto barrel) 7. summary the here shown east wall with apse was covered with 13 stereo models and round about 100 reference points were measured by a tacheometer. the orientation and evaluation of these stereo models and the shown result were made with the software metigo 3d. the final image processing and colour matching were made with adobe photoshop cs5. for the described evaluation procedure for all stereo models (from loading the images to the final assembly of the image ________________________________________________________________________________ geoinformatics ctu fce 375 plans) nearly 8 hours were required. additional nearly 8 hours calculation time was required during batch processing for image arrangement, generation of point clouds, triangulation and unwinding or ortho projection, and 4 hours were needed for image processing with adobe photoshop. the required time for evaluation generally depends on the amount of images, the resolution of images and the surface model and the existing computing power. the resulting orthophoto, processed in scale 1:10 and 400dpi image resolution, can be used as 2d mapping base for the documentation of the object. the digital surface model can be alternatively textured to use them for a 3d mapping. at the moment the 3d mapping is not used in restoring because of practical reasons like the amount of data, printing or data exchange with the final customer. parts of the here described evaluation steps were developed in a cooperation project with the society for the promotion of applied computer science and supported by: federal ministry of economics and technology on the basis of a decision by the german bundestag. 8. references [1] henze, f.; siedler, g.; vetter, s.: integration automatisierter verfahren der digitalen bildverarbeitung in einem stereoauswertesystem, 26. wissenschaftlich-technische jahrestagung der dgpf, berlin, 11.– 13.09.2006, band 15, s. 
239 246 [2] vetter, s.: generierung digitaler oberflächenmodelle (dom) im bereich der architekturphotogrammetrie, diploma thesis (unpublished), htwk leipzig, germany, 2005 [3] heinrich, m.: markante punkte und 3dobjektkanten in einem oberflächenmodel, diploma thesis (unpublished), htwk leipzig, germany, 2010 [4] bernardini, f., mittleman, j., rushmeier, h., silva, c., taubin, g.: the ball-pivoting algorithm for surface reconstruction. ieee transaction on visualization and computer graphics, 5(4), oct-dec, 1999, pp. 349-359. deriving hydrological response units (hrus) using a web processing service implementation based on grass gis christian schwartze department of geography – chair of geoinformatics, geohydrology and modelling university jena christian.schwartze@uni-jena.de keywords: qgis, grass, wps, pywps, web processing service, python, hru, hydrological response units abstract qgis releases equal to or newer than 0.7 can easily connected to grass gis by means of a toolbox that provides a wide range of standard grass modules you can launch – albeit only on data coming from grass. this qgis plugin is expandable through xml configurations describing the assignment of options and inputs for a certain module. but how about embedding a precise workflow where the several processes don’t consist of a single grass module by force? especially for a sequence of dependent tasks it makes sense to merge relevant grass functionality into an own and encapsulated qgis extension. its architecture and development is tested and combined with the web processing service (wps) for remote execution using the concept of hydrological response units (hrus) as an example. the results of this assay may be suitable for discussing and planning other wizard-like geoprocessing plugins in qgis that also should make use of an additional grass server. brief background hydrological response units may be considered as spatial entities with the objective of applying them to the process of water modelling. the designation of such regions as assumed for the present work operates on physiographical characteristics of the catchment area [2] and aims at its partitioning into zones similar to each other – both topography and dynamic related. for further information such as various additions you may refer to e.g. [1] and [5]. details and sub-steps of the derivation used by the planned tool are discussed in section 4. geinformatics fce ctu 2008 67 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis architecture due to the abundance of tasks a complete hru derivation consists of, it was decided to split it into modules developed as processes for pywps 2.0.1 [8]. to meet the requirements of a client/server system, albeit in this case running all components on just one single machine (including wps), a user-friendly client enabling the several tasks sequentially would be more than appropriate and has to be developed. in this context qgis gets the vote. not only on account of the python scripting support in qgis, but also because of its very well gis visualization capabilities equipped with basic, spatial tools. as pywps comes with native grass support, consequently all hru relevant computation is done by grass, here version 6.2.2. by the way, the written plugin profits i.e. from the temporary grass sessions in pywps since only important main data are swapped out when a hru task ends – no extra management of grass mapsets is needed. 
so in that case pywps serves as a kind of middleware between two gis, or in other words, it separates processing from visualization in the hru tool. figure 0: architecture extending qgis in order to write a new extension for qgis [6] you start work in an empty subfolder in /python/plugins/ of your installation directory. the plugin manager gets its information about available python plugins from the primarily created init .py file – the starting point for all upcoming implementation code. more precisely, the first activation of the plugin by the installation routine results in a call of the classfactory() function that returns a plugin instance initiating the toolbar icon, menu entries and other plugin related control items. the sample hru plugin geinformatics fce ctu 2008 68 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis adaptability concerning the plugin options and functionality is mainly focused during the development. later changes and improvements in the hru derivation process should be easy to integrate. hence, a module concept was designed and the phases of the current hru work flow were mapped on ready-to-use components instantiated through python classes. if you are willing to write some extension for the hru derivation plugin you have to become acquainted with the abstract python class hrumodule. therefore, an own module designed for the process chain has to be a subclass of hrumodule and has to implement four common functions: � setinput() specifies the layout of a tabbed widget and arranges the necessary input forms. � validate() addresses relevant module input parameters, checks and formats them to a valid pywps parameter string. � updatewizard() manages the modules impact on any other tabbed widget within the plugin, e.g. enabling subsequent wizard tabs, filling out forms or predefining options in upcoming tasks. � updatemapview() handles modifications that concern visualisation of map layers and linked legend entries in qgis. the individual processes were implemented according to the guidelines in [4]. thus, the hru derivation was divided into logical units which resulted in seven module classes. once coded, you can integrate such modules using the statement self.wizard.addtab(waterflowmodule(), waterflowmodule.module_icon, waterflowmodule.module_tab_def) that embeds a tab in the wizard whose initial state is enabled as long as an other module releases it. that is why the correct schedule of derivation is guaranteed, however a return to already performed steps is possible at any time. especially for testing influence of various input parameters the backspaces are considered meaningful. in pywps [8] each process stores its assigned and calculated data in grass mapsets that do not outlive the end of the process. that means, a series of n pywps tasks is instantiated along with n temporary mapsets whose names follow the pattern tmpmapset. in spite of the alternative to handle all processes in only one but persistent and already existent grass location/mapset, the temporary version has been used. so each process implementation will end with lines containing some g.copy calls. the advantage is that any interim solution never belongs to user’s location and is removed at the end of the wps process. when it is triggered twice (or several times) the grass data would just be overwritten by the wps process while copying it to the persistent mapset. 
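The four functions of HRUModule are only named in the text above; purely as an illustration, a skeleton of the abstract class and of one concrete module could be organised as follows (the exact spelling of MODULE_ICON and MODULE_TAB_DEF and all method bodies are assumptions, not code taken from the plugin):

# Sketch only: the method and attribute names follow the text, their exact
# spelling and all bodies are assumptions, not the plugin's real code.

class HRUModule(object):
    """Abstract base class for one step (one wizard tab) of the HRU derivation."""

    MODULE_ICON = None       # icon displayed on the wizard tab
    MODULE_TAB_DEF = ""      # default tab title

    def setInput(self):
        """Specify the layout of the tab widget and arrange the input forms."""
        raise NotImplementedError

    def validate(self):
        """Check the input parameters and format them into a PyWPS parameter string."""
        raise NotImplementedError

    def updateWizard(self):
        """Enable subsequent tabs or pre-fill options of upcoming tasks."""
        raise NotImplementedError

    def updateMapView(self):
        """Add or modify map layers and legend entries in QGIS."""
        raise NotImplementedError


class WaterflowModule(HRUModule):
    MODULE_TAB_DEF = "Water flow"

    def setInput(self):
        pass                     # build form fields, e.g. for the r.watershed threshold

    def validate(self):
        return "threshold=1000"  # illustrative parameter string only

    def updateWizard(self):
        pass                     # e.g. enable the overlay tab once the maps exist

    def updateMapView(self):
        pass                     # e.g. load the resulting basin raster into the map canvas

The wizard then only needs the addTab call quoted above to plug such a module into the process chain.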
the workflow more detailed all the processes explained in the following subsections have something in common: their results are relocated from a process-owned temporary mapset to a persistent mapset inside a predefined grass location. in process code stored (estimated) computing time information proves to be helpful for the user while he tracks the execution in the wizard (see the progress bar). geinformatics fce ctu 2008 69 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis preparation the qgis/grass based hru derivation starts with an option dialogue where you have to specify essential data, including the digital elevation model (dem), region characteristics (land use, soils and geology) as well as the locations of gauges. as the first noted are all raster maps, the latter one should be usually imported as a shapefile. to minimize every kind of computational effort in pending tasks users have to drag a bounding box keeping the rough catchment area in mind. the underlying wps process produces a subimage of each stated data layer using gdal/ogr and imports them to a grass location locally installed. yet another preprocessing task which is integrated into the wizard sequence as a separate module deals with the dem to obtain a depressionless elevation model (see the actual but still disabled preparation tab next to setup, not explicitly focused in screenshot of figure 1). means, another wps process is triggered that not only runs r.fill.dir multiple times but also provides slope and aspect of the area. figure 1: setup module reclassification as long as real-life surface values (gathered from whatever measuring method) represent slope, aspect and sinkless elevation data, an intersection between them is hard to handle. on that account the reclassification module expects rules defining classes of categories entered in three respective tables (figure 2). recommended ranges may be accepted or changed. internally, typical grass rule files are written and will serve as input for r.reclass. generation of waterflow related maps geinformatics fce ctu 2008 70 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis figure 2: reclassification module within the next step you have to make a set of water flow oriented maps available (figure 3). this includes the drainage direction, the accumulation and the location of watershed basins. an additional raster map has to point out the segmented stream network (so called ”reaches”). there is one grass analysis tool that covers the computation of all desired maps in a single command – r.watershed. unlike in many another wps processes this almost elementary case leads to a quite concise task description in python language. speaking about watershed basins means to distinguish between such type of basin derivation defined by r.watershed and such given through r.water.outlet. the latter grass module determines a basin as you pass a geographic coordinate, e.g. a gauge position. using for instance r.water.outlet in a further wps process and a well placed overlay statement inside the gauges iteration loop constitutes a solution for a gauge oriented basin map. in terms of accurate results you will probably have to move gauges onto reaches manually. but this can be done quickly since qgis offers a vector data editing mode (figure 3, right). 
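A rough sketch of what this step boils down to on the GRASS side is shown below; it assumes the code already runs inside a GRASS session (as PyWPS arranges for each process), and the map names, the threshold and the gauge coordinates are placeholders rather than project values:

import os

def compute_waterflow(dem="dem_filled", threshold=1000, gauges=None):
    # One r.watershed call produces drainage direction, accumulation,
    # watershed basins and the stream (reach) raster at once.
    os.system(
        "r.watershed elevation=%s threshold=%d drainage=drain_dir "
        "accumulation=accum basin=basins stream=reaches" % (dem, threshold)
    )

    # Gauge-oriented basins are derived one by one with r.water.outlet,
    # passing the (east, north) coordinate of each gauge.
    for i, (east, north) in enumerate(gauges or []):
        os.system(
            "r.water.outlet drainage=drain_dir basin=gauge_basin_%d "
            "easting=%f northing=%f" % (i, east, north)
        )

# example call with two made-up gauge positions
compute_waterflow(gauges=[(4500000.0, 5650000.0), (4501200.0, 5649300.0)])

In the real plugin these calls sit inside a WPS process, and the resulting rasters are copied with g.copy from the temporary mapset into the persistent one, as described earlier.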
overlay strategy the fifth step by the wizard (figure 5) serves as a special intersection operation between actual eight preset or calculated raster maps. latter includes the reclassified dem, slope and aspect data as well as soils, landuse and geology information. in addition, the watershed basin map and the basins relative to gauges in the catchment are required. the idea is shown in figure 4a and consists of following steps: 1. load the gauge basin map from subsection 4.3 as a reference map for spatial extent of resulting hru dataset and construct a map that masks out the relevant area 2. join the mask and above-mentioned data layers separately using r.patch and apply r.null geinformatics fce ctu 2008 71 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis figure 3: water flow module to redefine the null value in the new masked datasets 3. merge the non zero data in the eight maps of (2) via r.cross to a single map 4. make use of r.clump to relabel occurrences of non adjacent regions which still have the same category figure 4a: overlay method this procedure does not yet result in final hrus since so much spurious, midget areas may occur. eliminating almost pixelsized intersection snippets and their reallocation is an essential part in the postprocessing. in the range of vector data v.clean with correct parameters hits geinformatics fce ctu 2008 72 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis the spot. the same is true for r.reclass.area on raster maps but with the limitation that respective areas are filled with grass nodata cell value. filling them taking nearby areas into account is one solution discussed in [3]. the next script operates in a similar way: 1. detect areas which are smaller than a specific threshold, e.g. 28125m2 (= 45 pixels, 25m resolution assumed) 2. while such areas exist do: (a) get the one-pixel-wide boundary of each area and fill the interior with null (b) for every pixel onto the boundary do: i. reassign the category value with largest occurrence in the 3x3 neighbourhood (corresponds to mode value) ii. mark the left null values as removable, minimal areas (pink colored in 2 and 3, figure 4b) as indicated in the output map (4, figure 4b) snippets are just not reallocated to one neighbour region but rather melt into adjacent areas proportionally. when the superior wps process has done that kind of cleaning the hrus obtain their final form. however, it raises the question as to whether the underlying data associated with each hru is still significant. due to the fact that the cleaning algorithm manipulates the original overlay map (see above) depending on the number of eliminating areas and their location to each other, any dominant characteristic (e.g. soil type) could be changed. for this reason a further script takes the regenerated and cleaned hru map as a type of template. based on it all data layers are checked to determine a potentially new raster category that accounts for a major portion within each hru. this is done by calling r.statistics plus mode method as aggregation option. at the end of the overlay section it appears to be appropriate to store these gained and probably new categories as labels to the hru raster map. a piped combination of r.stats, some awk commands and r.reclass on the cleaned hru data helps writing a vertical bar separated label entry that represents values for the linked data layers: [...] 
#var inputs: list of data layers (with new determined raster values) inputs = inp_list.rstrip(",") awk_cmd = "’{print $1,\" = \",$1," for i in range(1, len(inputs.split(","))+1): awk_cmd += "$"+str(i*2)+"\"|\"" awk_cmd += "}’" g_cmd = "r.stats -l input=%s | " % inputs g_cmd += "awk %s | " % awk_cmd g_cmd += "r.reclass --o input=%s output=%s_result" % \ (os.getenv("gis_opt_input"), os.getenv("gis_opt_input")) os.system(g_cmd) [...] topological network while the last preceding paragraph has created the prerequisites to feed physiographic properties into some model the next section focuses on how to include relations between hrus. it aims at pointing out drainages from one hru into others, furthermore into streams ingeinformatics fce ctu 2008 73 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis figure 5: overlay module side catchment (routing). therefore, the topological sequence acquisition is bipartite and exemplified by figure 6 where pink lines demonstrate hru borders: ”hru to hru“ 1. respectively do a r.mapcalc to get (a) borderlines of the hru map (b) drainage direction only on borderlines from (1) – see step 1, figure 6 (c) drainage destination (id of hru) only on borderlines – see step 2, figure 6 (d) accumulation data only on borderlines – see step 3. figure 6 2. do a non null overlay only (r.cross -z) between hru source map and (1.3) to hold the ” hru to hru“ relation as raster labels 3. use (2) as base map in r.statistics to sum up accumulation data with regards to one and the same destination hru – see step 4, figure 6 4. finally overlay again (r.cross) to append the accumulation sums (3) to the ” hru to hru“ relation map (1.3) as is evident, the operations take advantage of r.cross twice. consequently, all required information about relations within the topological sequences is summarized in hru raster labels up to sample "category ; category ; ". that proves true when you have a look into the grass category file (/cats subdirectory) of the result layer: [...] 2:category 10; category 19; 53 3:category 10; category 20; 14 4:category 17; category 39; 141 [...] geinformatics fce ctu 2008 74 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis figure 6: relation hru to hru according to the first two lines, hru 10 drains into hrus 19 and 20 to the value of respectively 53 and 14. using this grass category file as an input for a small awk script topology information could be easily transformed to a more general format that joins one-to-many hru relations into one output row: [...] 10 19,53 20,14 17 39,141 [...] as mentioned earlier the topology delineation is separated into two parts: one part was just discussed, the other one is still outstanding. instead of draining into nearby hrus it also would be thinkable that water flows directly into any reaches before. the fact implicates some changes in comparison to the prior approach (in figure 7 let’s assume that blue lines illustrate the stream network): ”hru to reach” 1. do a r.mapcalc considering a stream buffer into account – with the objective to get the reaches in which stream neighbour cells flow (see figure 7) 2. 
perform nearly the same operations like in ”hru to hru” beginning with (1.4) but geinformatics fce ctu 2008 75 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis ignore accumulation accurately located on streams figure 7: relation hru to reach since step 1 marks reaches as negative numbers to avoid confusions with hru identifiers the process can carry on with parsing the category file as already done for ”hru to hru”. concluding work concatenates both into a final and all-embracing topology report. to this end, tools from unix command line are employed, for instance sort and join. only on that condition meaningful weights (with regard to total flow-out of every hru) are feasible with few awk instructions, i.e.: awk_calc_weights_in_topo = "’begin {print \"#topology n:m * format: | \ ;[ |; ...]\"} \ {for (i = 2; i <= nf; ++i) \ {split($i,a,\",\"); \ sum = sum + a[2]; } \ line = $1; \ printf line\" \"; \ for (i = 2; i <= nf; ++i) \ {split($i,a,\",\"); \ printf a[1]\";\"\"%.3f\"\" \", a[2]/sum; } \ sum=0; \ print \"\"; \ line = \"\";} \ end {}’" the ultimate result looks like: [...] 1542 1543;0.640 1655;0.175 1934;0.010 -14;0.004 -12;0.171 1543 875;0.955 1655;0.001 -12;0.044 1547 1165;0.382 1377;0.176 1468;0.029 1629;0.412 1568 1482;1.000 [...] conclusion the duration of the whole derivation process in qgis depends on the size of the selected subregion during initial wizard step (setup). the larger the chosen bounding box, the more noticeable the increase of computing time (see table 1). this is mainly attributed to the water flow oriented section of the wizard using r.watershed in the backend. at the expense of geinformatics fce ctu 2008 76 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis figure 8: topology module execution time the grass module yields more accurate maps than r.terraflow [7], for which reason it was preferred. however, it remains to check whether [9] may considerably improves the performance of the watershed basin analysis. or should process implementation changed by substitution with r.terraflow, provided that whose output raster maps are barely exact enough for the hru derivation work? there is also a need for optimization regarding to that part of the overlay algorithm where resulting hrus are relabeled after removing midget areas. actually, a simple r.reclass statement does the job but not very fast which may affect the total computation time, too. watercourse, gauge catchment size number of hrus duration erlbach, thieschitz (thuringia, ger) 105 km2 2116 12 min hasel, ellingshausen (thuringia, ger) 340 km2 6832 45 min gera, erfurt-möbisburg (thuringia, ger) 850 km2 16696 2.5 h table 1 – performance of the hru derivation in grass using the qgis extension references 1. flügel, w.a. (1995): delineating hydrological response units by geographical information systemanalyses for regional hydrological modelling using mms/prms in the drainage basin of the river bröl, germany. in: kalma, j.d. & sivapalan, m. (1995): scale issues in hydrological modelling. 183-194 geinformatics fce ctu 2008 77 deriving hydrological response units (hrus) using a web processing service implementation based on grass gis 2. leavesley, g.h.; lichty, r.w.; troutman, b.m.; saindon, l.g. (1983): precipitationrunoff modeling system; users manual, denver 3. neteler, m. and mitášová h. (2008): open source gis: a grass gis approach, third edition, springer, isbn 978-0-387-35767-6 4. 
Pfennig, B.; Fink, M.; Krause, P.; Müller Schmied, H. (2006): Leitfaden für die Ableitung prozeßorientierter Modelleinheiten (HRUs) für die hydrologische Modellierung

5. Staudenrausch, H. (2001): Untersuchungen zur hydrologischen Topologie von Landschaftsobjekten für die distributive Flußeinzugsgebietsmodellierung. Dissertation, Jena

6. http://www.qgis.org/ – QuantumGIS

7. http://grass.itc.it/ – Geographic Resources Analysis Support System

8. http://pywps.wald.intevation.org/ – Python Web Processing Service

9. http://markus.metz.giswork.googlepages.com/r.watershed_fast_version.tar.gz – Metz, M. (2008): r.watershed.fast

Java in open source GIS: GeoTools, GeoServer, uDig

Ing. Jan Ježek
Department of Mapping and Cartography, Faculty of Civil Engineering, CTU in Prague
E-mail: jan.jezek@fsv.cvut.cz

Key words: Java, GIS, open source, uDig, GeoTools, GeoServer

Introduction

Open source GIS covers most areas of geographic data management. Open source GIS products can be divided into two main groups: products written in C (C++) and products written in Java. In the C language these are above all UMN MapServer, GRASS, Thuban and the libraries GDAL/OGR, PROJ4 and GEOS. In Java they are GeoServer, uDig, JUMP and the libraries GeoTools and JTS. A basic building block of every open source GIS is the ability to connect to a PostGIS/PostgreSQL database, which both groups provide.

Projects in the C language

In general, the projects written in C are considerably more mature, above all thanks to the longer period of their development. The foundation of these projects is a set of shared software libraries, shown in the following figure. The libraries can be downloaded, for example, as the FWTools package (http://fwtools.maptools.org/).

Figure: FWTools

Shared libraries

PROJ4 – a library for working with cartographic projections, written in C. The main author of both libraries (GDAL/OGR and PROJ4) is Frank Warmerdam. Web site: http://remotesensing.org/proj/

GEOS – GEOS is the "Geometry Engine, Open Source". It is an implementation of simple spatial features according to the OGC "Simple Features for SQL" specification together with methods for topology. The library is written in C++. Web site: http://geos.refractions.net/

Projects in the Java language

The Java projects are still under development and at the moment cannot really compete with those written in C, mainly because of problems when working with larger data sets. Nevertheless, a complete solution covering all parts of a GIS product is being developed here. There are several independent projects (OpenMap), but also a comprehensive solution built on shared libraries and their implementations in both desktop and server GIS products. The scheme of relations between the libraries and the GIS products is shown in the following figure.

Shared libraries

GeoAPI – GeoAPI is a set of Java interfaces derived from the OGC specifications. It defines the design of objects and their methods for basic operations with geographic data. The goal of GeoAPI is to create a standard system of Java interfaces so that any newly created library can be connected with the existing ones. Web site: http://geoapi.sourceforge.net/

JTS Topology Suite – JTS Topology Suite is a basic library used in practically all Java GIS products. It is the Java counterpart of the GEOS library in C, i.e. an implementation of the OpenGIS "Simple Features Specification". The library contains topological functions such as contains(), intersects(), touches() and crosses(). Web site: http://www.jump-project.org/

GeoTools – GeoTools is an open source Java GIS toolkit for developing GIS products with a strong emphasis on the OGC specifications. The goal of the project is the development of the Java objects needed for final GIS products (a rough counterpart of ArcObjects from ESRI). Great emphasis is placed on the modularity of the whole system, so that users can use only those parts they really need. Web site: http://docs.codehaus.org/display/geotools/home

Selected GIS products

GeoServer – GeoServer is a Java (J2EE) implementation of the Web Feature Server specification of the OpenGIS Consortium. The application is built on the GeoTools library, which allows the core logic to be maintained separately. Technically it is a web application based on JSP and servlets running under one of the application servers (e.g. Tomcat). A basic example of the application can be seen at http://b9701.fsv.cvut.cz:7080/geoserver/. Compared with the most widespread similar application, UMN MapServer, it stands out mainly for its simpler installation and administration. Currently it can serve the following data formats:

- Oracle Spatial
- ArcSDE
- PostGIS
- ESRI Shape files

These data are made accessible as WFS, WMS or WFS-T services (the latter also allows editing of the data). Serving raster data has unfortunately not been implemented yet, but its inclusion is a matter of the coming months. An interesting point is the planned support of the WCS (Web Coverage Service), which makes it possible to serve multidimensional raster data, e.g. raster data together with information about the elevation of each pixel (a digital terrain model). A further advantage over competing products is the ability to serve data in the KML format and thus display them in the Google Earth application, see the figure.

Figure: GeoServer + Google Earth

uDig – uDig (User-friendly Desktop Internet GIS) is a desktop GIS product. Like GeoServer, GeoTools and PostGIS, uDig is developed by Refractions Research. It is built on the GeoTools library and on the Eclipse Rich Client Platform technology. Currently uDig provides the following functionality:

- a read/write WFS client, allowing both browsing and editing of data provided through the WFS and WFS-T services
- a WMS client for viewing data provided through the WMS service
- support for Styled Layer Descriptor (SLD), allowing thematic styling of the graphic layers (assigning a colour to a feature according to the value of its attribute)
- support for printed output
- support for standard GIS formats
- support for work with coordinate systems
- support for connections to PostGIS, Oracle Spatial, ArcSDE and MySQL databases
- uDig is platform independent (Windows, OS/X and Linux)

Figure: uDig

Conclusion

The spread of the Java language is clearly reflected in the development of open source GIS as well. The group of products described here indicates the future development in this area. The strengths of the presented solution are above all its great modularity and the support of the OGC specifications. In the coming years it will be interesting to follow the development of these products, which are a possible open source alternative to the commercial products of ESRI: PostGIS as an alternative to ArcSDE, GeoServer as an alternative to ArcIMS, uDig as an alternative to ArcMap, and GeoTools as an alternative to ArcObjects. They are, however, still products under development which struggle with a number of problems, above all regarding speed and work with large data sets.

References

1. Open Geospatial Consortium, Inc. home page, http://www.opengeospatial.org/ [2006-05-10]
2. Refractions Research home page, http://www.refractions.net/ [2006-05-10]
3. FWTools home page, http://fwtools.maptools.org/ [2006-05-10]

Future of open source systems

Karel Charvát
Czech Centre for Science and Society
charvat@ccss.cz

Keywords: open source, licensing, FOSS based business models, SWOT analysis, knowledge society, knowledge economy

Abstract

Software distribution strategies have many aspects and can be analysed by reviewing different cross-sections of a strategy. The focus of this paper is on the licensing aspect, which involves the licensing strategy, licensing risks and licensing enforcement costs. Furthermore, by formulating a licensing strategy the main technical and logistical aspects can also be anticipated. The key issues of this paper are the different business models for FOSS software and a SWOT analysis of the usage and development of FOSS software from the point of view of different user groups. This analysis was carried out as part of the work of the HUMBOLDT IP and Collaborative@Rural IP. These strategies are currently an important issue for members of the Czech Centre for Science and Society and of the WirelessInfo Living Lab, where models based on dual licensing are the key strategy.

Introduction

On one side, open source software is starting to play a more and more important role on the market of geospatial solutions; on the other side, one can still very often hear the argument that open source software is not suitable for large projects. This argument is frequently used by producers of closed commercial solutions, and public tenders also very often prefer closed commercial solutions. If we look at the situation on the market, we can see two important aspects:

- on one side, the relatively good market success of web based solutions;
- on the other, the almost zero acceptance of open source desktop GIS solutions by ordinary users (outside the FOSS community, which is mainly formed by people from academia).

For the future growth of open source, which can only come with market growth, it is necessary to look for models that will attract producers of basic components to publish their systems as open source, but which will also give them a chance to profit from large scale usage of their components. The goal is, in principle, to build communities where primary producers of components also have a chance to generate profit. The way is probably a dual license based on a combination of the GPL license with a proprietary license. Of the models currently in use, the most suitable seems to be the GeoServer model.
free software or new business model is it open source the way of free sharing of software or is it new future business model? i am sure, that on this question doesn’t exist one answer, but that there are two groups of people and every group will protect their own ideas. the initial idea coming from richard matthew stallman was idea of free software. the current development on software market demonstrate, that this idea could be stolen by commercial sector and could lead to new situation on market and that open source and new open source business models. the question is, if there will be chance to find common language between business people on one side and protectors of ideas of free software on opposite side. this paper mainly explains position of business sector, the interest of companies developing open source and also potential new methods of collaboration. the free software ideas protector mainly use such terms like freedom and creativity, for business is main goal end user satisfaction. from point of view of major end user community, the main question is not, is software is open or not, but if it is good or not. usually more important question is, if software is interoperable, then if it is open. on opposite side, there is necessary mentioned, that there are many good ideas and software coming from open source community mainly academic community, which could have future strong utilisation on the market, if there will be good cooperation between both groups. in next text, i will be focused mainly on web based technologies, because my personal opinion, that there is future for open source business. knowledge society and knowledge economy knowledge society and knowledge economy are two terms, which are very often used now and which are also very often mix together. there are strong differences and these differences are connected with both open source approaches. knowledge societies have the characteristic that knowledge forms a major component of any human activity. economic, social, cultural, and all other human activities become dependent on a huge volume of knowledge and information. the knowledge economy is a term that refers either to an economy of knowledge focused on the production and management of knowledge or to a knowledge-based economy. the important question is, if it is possible to connect both ideas, knowledge society and knowledge economy. is the knowledge creative force or good? knowledge is important for future development of society, but knowledge also could bring advantage on the market. there exist many ideas, that free knowledge could solve any social problems, that it could destroyed social barriers. that free knowledge could help to grow the business. but it is not fully true. free knowledge is advantages for stronger players on the market. living lab model when targeting full impact towards knowledge society and economy it is essential to have an integrated view on the technology development, usability and deployment of new solutions. living labs (lls) – are created to develop the best practices, and the best practices then geinformatics fce ctu 2009 46 charvát k.: future of open source systems are widely deployed throughout europe with support of funds from regional, national and european sources. ll is not to be understood as technology integration or test bed. the concept is much wider than that. lls are referred as innovation, validation and deployment environments. 
lls are environments which meld together the technological innovation, the application innovation and the societal experiment of knowledge society and economy. lls are open environments, where the innovation meets the needs of ”real” people, in ”real environments” leading to systemic solutions which fully take into account the usability perspective. these environments are large enough enabling the development of sustainable business and societal models, widely deployable outside. ll is more about creating together a process and a suitable environment for the process rather than traditional test environments. it should be a driver towards continuous innovation. to establish these environments, the public sector has a role to initiate the drive, by providing the kernel for further exploitable infrastructure and services. this should be done in conjunction with main industrial players, by sharing the risk and allowing for these societal experiments. verification and deployment of the spearheads can be carried forward with the support of regional, structural and local funds, while the private sector would take the responsibility of the product and service development, and the wide deployment of the solutions developed in lls. guarantee open source software development model sustainable development of living lab current paradigm is that not only development of software, but all knowledge inside of living lab has to be built on principle of open source. this could of cause speed up innovation inside of living lab, but on opposite side, these models doesn’t guarantee long time sustainability of living lab and cannot be implemented as single unique model. we compare our practical experiences with theoretical results of humboldt project and we could conclude our experiences into next topics: there exist real of sme it developers to use open source for building application. as main advantage was mentioned: � there can be found the program which suits the end user’s needs absolutely. � the end user can be engaged into the development directly and ”leave there his own footprint”. � sometimes the program could be very simple and the end user can easily grasp how it works. � user can just cut off the usable part of the code and starts his own project on this. � it is possible to use a source code from another project if both licenses allow that. on opposite side, there is small interest of sme developers to publish their components as open source. as main threats are mentioned: � according to the open philosophy it is hard to get some fees for the program usage. � it is necessary to change the business model. source of money revenue is not the sales of program, but additional services. geinformatics fce ctu 2009 47 charvát k.: future of open source systems � the user are sometimes quite ungrateful or even rude, so it is hard to deal with them. � the group can split apart with all the source codes and found the new company, so called ” fork“. this is mainly caused by personal arguments inside of a team. or simply rival company can take over the development and introduce better business plan. � it can happen, that very important developer can leave the company and the right substitution will not be found. the reason for this (leaving the company) may be also very ridicules. 
end user point of view
the end users see as an advantage of open source that it can be deployed for free (no fees, no maintenance cost) and without restrictions, and therefore may be available almost instantly compared to common procurement procedures; usually the open code (source) also allows legal changes to fit the user's requirements. open source developments usually cater for end-user requirements and develop mainly to address these requirements instead of following arbitrary requirements set by others (such as copy protection features requested by the content industry). it is possible to easily adapt the open source framework to specific needs and to program additional extensions (modules). an important aspect is that long term accessibility is usually guaranteed, and for an open platform there are often more open plugins and extensions available. as the main disadvantages it is mentioned that support on a voluntary basis alone may be insufficient, that the documentation is sometimes missing and training of new staff may be difficult, and that there are no guarantees; warranties have to be the subject of special consideration, as they are not explicitly regulated.

recommendation
the key issue is to be able to offer, together with open source software, not only free products but also commercial services, commercial documentation and training. this is the key issue for the commercial success of open source. from this point of view it is necessary that the primary producer also offers commercial services, or that there are members of the community who are able to guarantee this support.

open source from the point of view of software integrators
in the analysis provided by the humboldt project, the main advantages of open source were mentioned as:
- there is no need to pay for the software nor for support.
- redistribution of oss is possible under many licenses.
- own products can be distributed with the oss fully integrated.
- a company can influence the development by providing resources to an open source project or simply address the programmers with specific remarks.
- there is a chance to take part in the development process, even if only helping with the translation.
- open source developments are often highly configurable and thus can form a base for commercial developments in many different directions (the eclipse rcp is used both as a base for gis and as an ide, for example).
- programs are usually made with emphasis on backwards compatibility; new releases are fully compatible with the older versions.
- there is a social network around the developers and between the end users.
- under certain circumstances, many libraries or solutions can be integrated into the new product which would not be allowable if the new product was closed source.
- programs usually use well known standards and don't come with their own new standards.
- source code is reviewed many times by the community.
as the main disadvantages were mentioned:
- for many businesses, high costs are incurred since their users have difficulties switching to the open source platform, especially concerning the operating systems and their administration.
- as a result of the previous point, commercial operating systems still prevail on the workstations.
- there can be difficulties with communication with the environment, especially regarding the exchange formats.
- for some customers, the notion of using open source software is unacceptable because of perceived security issues.
- it is practically not possible to get revenue by selling licenses.
there are quite a few different business and usage models based on the various styles of open source licenses. the decision of software integrators about the usage of open source often depends on the concrete license.

gnu gpl
the positive aspects of the gnu gpl license are:
- maintenance of the gpl for derivative software, thus guaranteeing free access and distribution.
- it is the most commonly used free software licensing method.
- the integrator can view and use the source code for his development activities.
- the integrator can modify the source code for his products.
- the integrator can provide his software under his own name.
- the integrator will have good community support.
- competitors cannot use the software to create closed-source derived products.
as negative aspects it was mentioned that:
- sds have to make their modifications available to all who use their products.
- libraries wanting to use the sd's product must also be under the gpl, thus possibly limiting acceptance.

gnu lgpl license
the positive aspects of the gnu lgpl license are:
- the integrator can use the software for his products without limitations.
- the integrator can modify the source code for his products without limitations.
- the integrator can provide his software under his own name.
- the integrator will have good community support for the library.
- the integrator can use the product licensed under the lgpl even in proprietary systems.
- the library itself will be updated by the community, since changes have to be contributed back.
as negative aspects it was mentioned that:
- it permits use of the library in proprietary programs, thus enabling the possibility of not having access to all software that uses the library.
- competitors might use the software and create competing commercial products.
generally, integrators prefer the lgpl license to the gpl.

bsd/mit
the positive aspects of the bsd/mit licenses are:
- sds can use the source code for their products.
- sds can modify the source code for their products.
- sds can provide their software under their own branding.
- sds can use them for open source or closed source products.
- it is possible to gain much revenue with the software products.
- the flexibility towards different business models is unlimited; all business models can be applied.
as negative aspects it was mentioned that:
- there might be lesser advancement.
- licensing can creep away from open source, thus scaring customers who value open source licensing models.
- there is a danger that many developers will make closed source products, so there might be no good community support.
generally it could be said that integrators prefer the possibility of using the library in proprietary programs, thus enabling a greater community to use it, and also the permission to modify the software without the need to republish the modifications.

conclusion of integrators view
software integrators prefer to use other types of license than the gpl. the advantage for them is that they can use these open source components inside their solutions, but there is no benefit for the open source community.

open source from the point of view of sme tools developers
in principle there is small interest of sme developers in publishing their components as open source.
as the main threats are mentioned:
- according to the open philosophy it is hard to get fees for the program usage.
- it is necessary to change the business model; the source of revenue is not the sale of the program, but additional services.
- the users are sometimes quite ungrateful or even rude, so it is hard to deal with them.
- the group can split apart with all the source code and found a new company, a so called "fork". this is mainly caused by personal arguments inside a team, or simply a rival company can take over the development and introduce a better business plan.
- it can happen that a very important developer leaves the company and the right substitute will not be found. the reason for leaving the company may also be very ridiculous.
the above mentioned points are very important and it is difficult to overcome this opinion. our analysis also demonstrated that most useful open source products were in the beginning supported by a certain form of public subsidy (direct intervention or development at universities). our experience based on eight years of open source usage and development from the point of view of smes can be concluded in the following points:
- for successfully opening your products as open source on the market you need to have a certain, strong market position, which guarantees you that your profit from opening your solution will be higher than your potential loss of part of the market.
- it can be very useful to open either your older solution or a product which is not the main part of your portfolio. this can bring you a big marketing profit.
- it is useful to open as open source such a product which can support selling your other products, for example libraries or solutions which depend on your commercial products.

gnu gpl license
the advantage of the gnu gpl licence for the tools developer is the possibility to build a community of developers. possibly good community support will make it easier to add new services or to make existing services more efficient. it is guaranteed that licensing will not change, and modifications done to the source code for in-house applications do not need to be shared. the main disadvantage is that the effort invested by the service provider into the software can create an advantage also for competitors.

gnu lgpl licence
the only advantage of the gnu lgpl licence for the tools developer is the possibility to build a community of developers. on the other hand there exist a lot of problems, the main ones being that it permits use of the library in proprietary programs, thus enabling the possibility of not having access to all software that uses the library, and that competitors might use the software and create competing commercial products.

bsd/mit
also here the only advantage of the bsd/mit licence for the tools developer is the possibility to build a community of developers. the disadvantages are that the licence of a fork can be changed to non open source, that it permits use of the library in proprietary programs, thus enabling the possibility of not having access to all software that uses the library, and that many developers will make closed source products, so there might be no good community support.

other possibility: wirelessinfo licence
in some cases tools developers prefer open source only inside a certain community, for example a living lab. in many lls there exists the opinion that the open source model cannot be recommended universally.
for this reason we introduced a new type of licence (the wirelessinfo licence), which combines the approaches and advantages of both commercial development and open source development. the source code is managed by one organisation, as for open source, but it is not generally free. the source is open to other organisations (smes) after signing this licence, which guarantees the initial developer a certain amount of money after the sale of applications which use these components. the number of payments is usually limited to the first 10 or 20 licences sold; after that the usage is free. new users cannot distribute the source code to third persons.

conclusion from the tools developers' point of view
from the point of view of commercial tools developers only the gpl licence can in principle bring some advantages and support the market position of the developers. all other licences give more disadvantages than advantages.

conclusion
the previous analyses show that open source development can bring a lot of advantages not only for research but also for the commercial community. especially in this period of economic crisis, using open source can help both users (private and public) and the whole it sector. the fact is that on one side commercial software is currently extremely expensive and it is difficult for end users to pay the license fees for commercial software. the european commission promotes the use of open source and there are usually some recommendations on national levels. on the other side, the conditions of tenders are usually such that open source producers are excluded from the tendering process; the argument of low quality of open source products is used very often. nevertheless, this period of economic crisis brings an excellent opportunity to change the situation inside the ict sector. the situation gives more opportunity to open solutions and also to smaller, flexible companies which will be able to adapt their behaviour and react to the new situation. another important issue is large scale cooperation of small companies, not only inside one country but internationally. it is also important to find the way in which this sme commercial sector can cooperate with the academic sector. this could be the main advantage of open source. all of this is an excellent opportunity for open source solutions, but to really be able to guarantee user satisfaction it is necessary to guarantee commercial support and also more flexible possibilities of licensing. as was demonstrated earlier, the gpl license can protect the interests of developers, but very often it is not accepted by integrators, and without strong acceptance of open source by integrators it is difficult to guarantee a better position of open source on the market. so it is important to change the basic paradigm of the open source community. the current philosophy (usually called virus or militant) is: to use and contribute. this philosophy is mainly supported by the gpl license. my personal opinion is that this philosophy has to be changed and that the future is in the model: to use and contribute or pay. the best solution for this is a combination of the gpl and a commercial license (with commercial support). money coming into development can increase the power of the community. this model is now used for example by the geoserver community. currently we have started to use this model inside the czech living lab, which is organised around wirelessinfo and ccss.

references
1. armantas ostreika / ktc; michael printzos / pro; guillermo schwartz / logcmg; thorsten reitz / fhg-igd; karel charvat / hsrs: humboldt, a2.4-d1 software distribution strategies and business models, 2007
2. karel charvat, petr horak, sarka horakova: living lab (ll) business models for local development, echallenges 2008, stockholm
3. business models analysis for czech living lab (cll), c@r project internal analysis
4. wirelessinfo licence, http://www.wirelessinfo.cz

pywps 2.0.0: the presence and the future
jachym cepicky
help service – remote sensing s.r.o, černoleská 1600, 256 01 benešov u prahy
jachym at bnhelp dot cz
keywords: ogc, web processing service, wps, pywps, gis, grass, foss4g

abstract
this paper presents the current status of the pywps program (http://pywps.wald.intevation.org), which implements the ogc web processing service standard. pywps 2.0.0, which was released only recently, implements ogc wps 0.4.0. nowadays, ogc is preparing the wps 1.0.0 standard, with slightly different characteristics. next versions of pywps should implement this version too.

ogc web processing service
the open geospatial consortium, inc.® (ogc) is a non-profit, international, voluntary consensus standards organization that is leading the development of standards for geospatial and location based services. ogc specifications are technical documents that detail interfaces or encodings. software developers use these documents to build support for the interfaces or encodings into their products and services. one of these documents is the ogc web processing service (wps). it is a relatively new standard (compared to other, more widely used standards, like the ogc web mapping service (wms) or the web feature service). the document number 05-007r4 describes version 0.4.0 of the wps standard. the request for comments on this standard was published in february 2006. in june 2007, version 1.0.0 of this standard was released. the standard describes the way geospatial operations (referred to as "processes") are distributed across networks. a wps server can be configured to offer any sort of gis functionality to clients across the network. a process can be a simple calculation, like adding two raster maps together or making a buffer around a vector feature, as well as a complicated model, for example a climate change model. the main goal of wps is that computationally demanding operations are moved from client stations (generally desktop pcs) to the server.

wps request types
three types of request-response pairs are defined. the request can be in key-value-pairs (kvp) encoding as well as an xml document. the server response is always formatted as an xml document.
- getcapabilities – the server returns a capabilities document. the first part of the document includes metadata about the server provider and other server features. the second part of the document includes the list of processes available on the server.
- describeprocess – the server returns a processdescription document. apart from the process identifier, title and abstract, the process in- and outputs are defined.
- execute – the client hands over the necessary inputs for a particular process, the server performs the geospatial calculations and returns a document with all process outputs.

data types
three basic types of in- and output data are defined:
- literaldata – character strings as well as integer or double numbers.
- boundingboxdata – two pairs of coordinates.
- complexvalue and complexvaluereference – input and output vector and/or raster data. vector data (e.g. gml files) can be directly part of the request/response execute document (then the input is of type complexvalue). the client can also specify only a url to the input data (e.g. the address of a web coverage service (wcs)); in this case, the data are of type complexvaluereference.

pywps
the pywps program implements the ogc web processing service (wps) standard in its 0.4.0 version. it can be understood as a proxy between common command-line-oriented gis programs, like grass gis, the gdal/ogr tools as well as the r statistical package, and the internet. pywps is free software, published under the gnu/gpl. currently, pywps has three developers and about 40 people registered on the mailing list. from the beginning, it has been developed with direct support for grass gis. the pywps homepage can be found at http://pywps.wald.intevation.org.

pywps usage examples
several examples are provided which demonstrate the usage of ogc wps. all examples are web browser oriented.

wps demos
two demo applications are provided, with the same functionality: the help service – remote sensing demo running at http://www.bnhelp.cz/mapserv/wpsdemo/ and a second demo at http://www.bnhelp.cz/mapserv/pokusy/openlayers/wpsdemo.

prefarm
pywps was used as part of the premathmod project, which develops statistical and mathematical modelling, data analysis, simulation and optimization methodologies for precision farming. on the server, grass gis is used to calculate optimal fertilization over fields.

embrio interface
lorenzo becchi made a wps interface for his map viewer ka-map. it works together with pywps and until now only basic functions are implemented, like shortest path calculation, vector buffers and lines of sight.

openlayers wps plugin
wps can also be used together with the openlayers map viewer. a new openlayers.control class was written which serves as a wps client. it communicates with the server directly and can be used for any kind of process.

what's new in wps 1.0.0
in july 2007, a new version of the wps standard was launched. this version implements some functionality missing from the previous version of the standard, mainly:
- a soap/wsdl api
- a new and richer kvp request encoding definition
- better input specification, including e.g. maximal file size
- basic support for wps server internationalization

future of pywps
the current version of pywps supports only the 0.4.0 version of the named standard. in the summer of 2008, the pywps development team would like to release a new version which would implement the current version of the standard. another field of development are clients suitable for wps. currently, apart from proprietary client-server solutions, there are wps plugins for the udig and openjump programs, both provided by 52north, and the openlayers wps plugin. other gis (viewers) still lack a suitable wps plugin, so that they could enjoy the advantages of remote geoprocessing.

conclusion
pywps (python web processing service) implements the ogc web processing service standard in its 0.4.0 version. several applications which use pywps are available as internet as well as intranet applications. in the future, the new version of the wps standard should be implemented, and new wps client plugins should be developed.
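to make the request types and the kvp encoding described above more concrete, here is a minimal client-side sketch that only assembles the corresponding kvp-encoded urls; the endpoint address and the process identifier are hypothetical placeholders and not part of pywps itself.

# minimal sketch of kvp-encoded wps requests (getcapabilities, describeprocess,
# execute); the endpoint and the process identifier are hypothetical examples
from urllib.parse import urlencode

ENDPOINT = "http://example.org/cgi-bin/wps"  # hypothetical wps endpoint

def kvp_url(params):
    """append key-value-pair parameters to the endpoint url"""
    return ENDPOINT + "?" + urlencode(params)

# getcapabilities: ask the server for its metadata and the list of processes
get_capabilities = kvp_url({"service": "WPS",
                            "request": "GetCapabilities"})

# describeprocess: ask for the inputs and outputs of one particular process
describe_process = kvp_url({"service": "WPS",
                            "version": "1.0.0",
                            "request": "DescribeProcess",
                            "identifier": "buffer"})       # hypothetical process

# execute: run the process, passing a literaldata input as key=value
execute = kvp_url({"service": "WPS",
                   "version": "1.0.0",
                   "request": "Execute",
                   "identifier": "buffer",
                   "datainputs": "size=100"})              # hypothetical input

for url in (get_capabilities, describe_process, execute):
    print(url)
# each request is answered by the server with an xml document
# (capabilities, processdescriptions or executeresponse)

in practice such urls would be fetched (e.g. with urllib.request.urlopen) and the returned xml parsed, which is roughly what the browser based demos described above do through their javascript clients.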
references
1. ogc web processing service 0.4.0, editors: peter schut, arliss whiteside, ref. number: ogc 05-007r4, open geospatial consortium inc., 2005
2. pywps – implementation of the ogc wps specification, http://pywps.wald.intevation.org
3. openlayers – javascript library for dynamic maps provided by metacarta, http://openlayers.org
4. ka-map! – javascript api for developing interactive web-mapping applications, http://ka-map.maptools.org/

the standard of management and application of cultural heritage documentation
yen ya ning, weng kuo hua, cheng hung ming, hsu wei shan
china university of technology, faculty of architecture, faculty of interior design, graduate student of architecture
no.56, sec. 3, singlong rd., wunshan district, taipei city 116, taiwan (r.o.c.)
alexyen@cute.edu.tw
keywords: cultural property, database, data interpretation, digitizing, official integration data system

abstract: using digital technology for cultural heritage documentation is a global trend in the 21st century. many important techniques are currently under development, including 3d digital imaging, reverse engineering, gis (geographic information systems) etc. however, no system for overall management or data integration is yet available. therefore, we urgently need such a system to efficiently manage and interpret data for the preservation of cultural heritages. this paper presents a digitizing process developed in taiwan by the authors. to govern and manage cultural property, three phases of property conservation (registration, restoration and management) have been set up along a timeline. in accordance with the laws of cultural property, a structural system has been built for project management, including data classification and data interpretation with self-documenting characteristics. through repository information and metadata, a system catalogue (also called a data dictionary) (figure 1) was created. the primary objective of the study is to create an integrated technology for the efficient management of databases. several benefits could be obtained from this structural standard: (1) cultural heritage management documentation can be centralized to minimize the possibility of data re-entry and the resulting inconsistency, and also to facilitate simultaneous updating of data; (2) since multiple data can be simultaneously retrieved and saved in real time, the incidence of errors can be reduced; (3) this system could be easily tailored to meet the administrative requirements for the standardization of documentation exchanged between cultural properties institutions and various county and city governments.
figure 1: an integrative structure of data dictionary for cultural heritage
1.
background using digital technology for cultural heritage documentation is a global trend in the 21st century. many important techniques are currently under development, including 3d digital imaging, reverse engineering, gis (geographic information systems) etc. however, no system for overall management or data integration is yet available. therefore, we urgently need such a system to efficiently manage and interpret data for the preservation of cultural heritages. government as a key role of information management must be concerned and aware of the information provided to the public based on legal requirement. from the perspective of knowledge management, 4 phases are included: production, processing, dissemination and application. concerning the diversity of heritage, the understanding and value assessment will give a help for the conservation interpretation and presentation. 1.1 goals this paper discusses standards of digitizing process to govern and manage cultural property in taiwan. there are two main goals: to propose the standard for the production of information based on the requirement of diversity and management. to propose a dbms (database management system) based on the km (knowledge management) concept and providing an integrated service. 1.2 methods two main methods were used in this study. document method is analyzing and finding the characteristics of cultural heritages. the other is rapid prototype method which is integrated with information technology. 1.2.1 document method 1. to clarify the characters and value of cultural heritage by reviewing the international conventions, charters, documents. 2. to analyze the trend of digitizing technology for heritage documentation by reviewing the research papers. 3. to explore the requirement of cultural heritage management in taiwan by reviewing legal documents and case studies. 4. to explore frameworks established with knowledge in cultural heritage by reviewing km papers. 1.2.2 rapid prototype method this paper uses the rapid prototype method to set up an integrated management system, evaluating its feasibility and convenience in future expansion by practice. 2. the characteristic and value of cultural heritage it has been nearly 50 years since the inception of the venice charter in 1964 declaring the importance of conservation. many significant concepts and practices have been promoted through various literatures after that. 2.1 value the venice charter points out that “……its aim is to preserve and reveal the aesthetic and historic value of the monument and is based on respect for original materials and authentic documents. ……”. under the efforts of unesco, the 1972 convention concerning the protection of the world cultural and natural heritage developed a new era for the world heritage. an idea of ouv (outstanding universal value) has been pointed out as the most important issue in the evaluation and preservation process of world heritage. 2.2 authenticity and integrity the 2008 operational guidelines for implementation of the world heritage convention declared that “to be deemed of outstanding universal value, a property must also meet the conditions of integrity and/or authenticity…”. there are many international documents concerning these 2 issues. such as: 1.the washington charter, 1987 2. the nara document on authenticity, 1994 3. convention for the safeguarding of the intangible cultural heritage, 2003 ________________________________________________________________________________ geoinformatics ctu fce 2011 356 4. 
charter on cultural routes, 2008 2.3 cultural diversity many national or international documents are concerned with cultural diversity. such as: 1. the burra charter, 1979 2. universal declaration on cultural diversity, 2001 3. convention on the protection and promotion of the diversity of cultural expressions, 2005 2.4 management and education in 2008, the 16th general assembly of icomos approved the quebec declaration on the preservation of the spirit of place. this declaration points out the importance of re-thinking, identifying the threats, safeguarding and transmitting the spirit of cultural heritage sites. the assembly also approved the charter for the interpretation and presentation of cultural heritage sites. the charter concludes the significance of “value” and proposes 7 principles as means of enhancing public appreciation and understanding of cultural heritage sites. 2.5 summary 2.5.1 the trend the ideas of cultural properties preservation in the early stage put great emphasis on physical preservation and supplemented with texts and images. in 1964, article xvi of venice charter pointed out the importance of documentation and files. during the last 2 decades, concept of conservation had changed enormously with knowledge available and information easily obtained to all. the changes are reflected in following areas: 1. the expansion of heritage areas (1) expanded from tangible to intangible, promoting the preservation of intangible heritage. (2) expanded from the point to line to a plain, opening the path of cultural landscapes and cultural route as new research issue. (3) expanded from physical conservation to overall thinking of lifestyle. 2. respecting for ordinary people (1) highlight diversity and cultural diversity. (2) highlight preservation of non-artistic (such as tools, materials, construction methods, technology, etc.). (3) highlight the rights and necessity of public knowledge and participation. 3. the development of active management (1) to take notice of the basic file and its method of application and other issues. (2) to take notice of management and related issues such as industry and community interaction. (3) to take notice of management derived from economic, legal and other issues. these changes touch on the content of knowledge and data as well as the process of dissemination and reception. in the 21st century of digital era, how to use the highly popular system with effective management and delivery, communication and preservation of information, has become an unavoidable issue. 2.5.2 the role of digital technology computer and internet have changed the way of knowledge dissemination through digital archives. development of digital technology not only speeds up data collection, it also helps data management, dissemination and application. preservation of cultural properties passes on human knowledge. digital technology allows effective preservation of cultural properties and analysis and has become a trend all over the world. in 2008, the 17th session of the general assembly of international council on monuments and sites (icomos) adopted the declaration of quebec (quebec declaration) pointed out that: “..in the protection and promotion of world heritage monuments and sites. 
it also calls upon a multidisciplinary approach and diversified sources of information in order to better understand, manage and conserve context…considering that modern digital technologies (digital databases, websites) can be used efficiently and effectively at a low cost to develop multimedia inventories that integrate tangible and intangible elements of heritage, we strongly recommend their widespread use in order to better preserve, ________________________________________________________________________________ geoinformatics ctu fce 2011 357 disseminate and promote heritage places and their spirit. these technologies facilitate the diversity and constant renewal of the documentation on the spirit of place.” cultural heritage interpretation and presentation of the charter (the icomos charter for the interpretation and presentation of cultural heritage sites) also pointed out that: “develop technical and professional guidelines for heritage interpretation and presentation, including technologies, research, and training. such guidelines must be appropriate and sustainable in their social contexts” these above-mentioned articles reiterate the importance of conserving, processing and presenting cultural heritage with digital technology. “integrated technology”, the invention from computation, has become an important digital technology since the 1980s. the business uses a lot of integrated systems such as enterprise resource planning (enterprise resources planning, erp), with its aim to link all departments of the work flow (operation procedure), in order to achieve real-time communication and produce accurate information. preservation of cultural properties includes knowledge of multiple fields such as architecture, sociology, history, surveying, structure, geography, art, etc. working with various fields of experts and academics to co-research and investigate is important. integrated digital technology has become an inevitable trend to be effectively applied to cultural properties conservation. 3. the trend of digitalization research of cultural properties 3.1 the international trend of information digitalization information production and dissemination of the development had a breakthrough in the late 20th century, thanks to the rapid development of computers and the impact of network technology. government and non-governmental organizations around the world have conducted quite a few digital cultural heritages related projects and actions and made it an international issue of common concern, including: 1. memory of the world program and e-heritage driven by unesco since 1992. 2. american memory driven by america since 1990. 3. visual museum of canada, vmc. 4. the number of library projects in australia since 1996. 5. europeane driven by european union. 6. world digital library. 7. the wikipedia. 8. taiwan digital archives driven by taiwan, roc. the above programs have the same characters in cultural theme, diverse participation, digital technology and longterm devotion. 3.2 the key of digital technology digital technology has two key issues which are knowledge management and implementation of the technology. 3.2.1 knowledge management 1. principle knowledge management is an emerging field of research and the key points being how to deal with a large number of complex data and make it useful through effective management. 
the basic process of knowledge management can be divided into a four-cycled procedure including data production, information management, knowledge dissemination and application (figure 2). 2. the professional gap and the necessity for integration digitization can quickly reduce the data access time. interactive web tools such as web 2.0 provide better dissemination of information publishing platform. however, whether the data has been handled by standards of knowledge management and digitalization prior to its release or not isn‟t taken seriously. neither are its management and application after. in recent years, the so-called “database” has turned into “data graveyard” which amply explains the crisis of lacking in standard. cultural heritage and digital technology originally belong to two different fields of ________________________________________________________________________________ geoinformatics ctu fce 2011 358 expertise. with the demand for digitization and knowledge management framework, integration of the two professional fields is necessary. figure 2: four-cycled knowledge management procedure 3.2.2 digital implementation technology digitization of cultural properties is a new technology. many research results have been presented and shared at important international seminars. the topics include: 1. data acquisition and analysis: laser scanning (ground, from air), photogrammetry, remote sensing, etc. 2. data standard: standard, pointer, intellectual property, etc. 3. data transmission: transmission technology, web, gis, etc. 4. data applications: environmental monitoring, disaster prevention, education, enterprise, etc. during the past decade, several issues about cultural heritage were discussed repeatedly. they not only explained the digital technologies that were still progressing, but also reflected the diverse digital technologies and the diverse characteristics. 3.3 management purpose the purpose of preservation of cultural heritage is to sustain cultural properties through public understanding. in order to provide convenient and accurate information, a well-constructed and integrated management system is needed for handling vast and diverse data. there are two points of views about data management in the past. one is on demandoriented administration (policy-orientated), the other is public-oriented (public-orientated). government played an important role in preservation of cultural properties heritage. combined with public opinions, the study promotes a digitalized standard for management of cultural heritages. from the perspective of knowledge management, this study provides a research management framework shown in figure 3. 1. the diversity of cultural heritage properties should be taken into account on the level of management, interpretation and presentation and other needs. ________________________________________________________________________________ geoinformatics ctu fce 2011 359 2. considering life cycle of administrative procedures for cultural heritage preservation, data for the integrative management system should be divided into registry, repair and management in three dimensions. 3. the database established should meet the demands of the public and the government for its application and dissemination. 4. the integrative management system should take into account the future possibility of expansion and added value. 
figure 3: an integrative structure of data dictionary for cultural heritage
4. a tentative management system for taiwan
4.1 background
taiwan has had its cultural heritage preservation act (chp act) since 1982. under the influence of the international movement, the act has been amended and promulgated 7 times. in 2005 a new act with articles took effect; an information system and data tables were set up together with this new act, and a new era of digital management for cultural heritage began. however, this system mainly focused on the first phase of conservation. almost all the tables dealt with basic information and the registration process of monuments, while lacking information on restoration and management (figure 4). based on this system, an official database for the monuments was established in 2008. as the former planning lacked an integrative concept, its database was inefficient and is being rearranged at the moment.
figure 4: the idea of an integrative dbms in taiwan
4.2 requirement analysis
4.2.1 action according to the chp act, 2005
the basic information for the management can be shown as in figure 5. three issues can be described in advance from figure 5.
1. the act and accompanying rules* provide a clear procedure needed to identify the process of preservation. it can be taken as basic information for the database.
2. the information required in the 3 different phases is necessary to help the management according to its importance.
3. with a database management system (dbms), an efficient and convenient platform for the management of cultural properties can be provided.
* under the chp act 2005, there are enforcement rules and other rules prescribed by the government to describe the operational process and the required information of preservation.
figure 5: the flowchart (intangible and tangible) of 3 phases of information managements
4.3 the principles of data structure
4.3.1 information and data interpretation
considering the life cycle of data and the different types of data usage, data must be stored on different servers according to the data type in order to provide appropriate services. the study uses metadata to provide communication channels between the different types of data servers. metadata are generated automatically by the dbms and stored in a public dictionary so that other dbms can access and query these metadata. for some specific usages, fields can be added to the metadata on demand to facilitate follow-up data queries.
4.3.2 normalization
the information obtained in the previous stage can be divided into categories: text, images, sounds, videos and other types. in order to ensure the consistency and accuracy of the database contents, every field of data entered by the user must be normalized, to improve efficiency and avoid unnecessary consumption of storage space and waste of bandwidth in the transmission of data.
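as a rough illustration of this normalization principle and of relating tables through a unique code, which the text below returns to, the following sketch creates a few related tables and a simple data dictionary entry. sqlite is used only to keep the example self-contained; the study itself was built on microsoft sql server 2008, and all table and column names here are hypothetical.

# sketch of normalization: one master record per cultural property, with the
# three phases (registration, restoration, management) kept in separate tables
# related through a unique property code, plus a simple data dictionary row;
# sqlite keeps the example self-contained, all names are hypothetical
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE property (
    property_code TEXT PRIMARY KEY,   -- unique code relating all tables
    name          TEXT NOT NULL,
    category      TEXT NOT NULL       -- monument, historic building, ...
);
CREATE TABLE registration (
    property_code TEXT REFERENCES property(property_code),
    registered_on TEXT,
    legal_status  TEXT
);
CREATE TABLE restoration (
    property_code TEXT REFERENCES property(property_code),
    started_on    TEXT,
    description   TEXT
);
CREATE TABLE management (
    property_code TEXT REFERENCES property(property_code),
    inspected_on  TEXT,
    condition_note TEXT
);
CREATE TABLE data_dictionary (        -- metadata catalogue other systems can query
    table_name    TEXT,
    column_name   TEXT,
    description   TEXT
);
""")

# each fact is stored once and linked by the code, so an update happens in one place
cur.execute("INSERT INTO property VALUES ('TPE-001', 'example temple', 'monument')")
cur.execute("INSERT INTO registration VALUES ('TPE-001', '2008-05-01', 'registered')")
cur.execute("INSERT INTO data_dictionary VALUES ('registration', 'legal_status', "
            "'status under the cultural heritage preservation act')")

# phase tables are joined back to the master record through the unique code
cur.execute("""SELECT p.name, r.registered_on
               FROM property p JOIN registration r USING (property_code)""")
print(cur.fetchall())
conn.close()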
with database management system construction, different tables can be related through a unique code. this study was set up with microsoft sql 2008 system and database related model (figure 6). figure 6: database related model figure 7: integrative management system ________________________________________________________________________________ geoinformatics ctu fce 2011 362 4.4 system sample this study uses rapid prototype method to establish an integrated database management system. system demo screen. (fig.7) 5. discussion 5.1 result several benefits could be obtained from this structural standard: (1) cultural heritage management documentation can be centralized to minimize the possibility of data re-entry and resulting inconsistency, it also can facilitate simultaneous updating of data; (2) since multiple data can be simultaneously retrieved and saved in real time, the incidence of errors can be reduced; (3) this system could be easily tailored to meet the administrative requirements for the standardization of documentation exchanged between cultural properties institutions and various county and city governments. 5.2 the goal of this system the 17th icomos general assembly (2011) will take “heritage: driver of development” as a main issue. this concept identifies that the preservation and reuse of cultural heritage should play an active role and extends their sight toward an integrative field for the creation of tomorrow‟s world. for the convenience of recording, documentation and application, the development of digital techniques has become most important in support of this goal. corresponding techniques such as gis, 3d laser scan, photogrammetry, remote sensing, dbms etc., have been applied in the field of conservation. on the other hand, the main content of heritage and applications with added value is another important issue. for the documentation and application information, building an integrative frame work is needed. besides meeting the basic recording and management requirements, how to open to public, how to apply with added value and what new business model might evolve should be thought through while setting standards. 5.3 the contributor and user conservation of cultural heritage is a work of highly integration. for accuracy, efficiency and safety, it would be better for the government to take the main responsibility for the establishment and management of centralized dbms. private sectors and ngo should play an active role in such process. as to open this db to the public, the accuracy, timeliness, information security, intelligent property right, etc., should take into consideration. 5.4 main issues of the standardization 5.4.1 the content of information each monument holds vast and various data. it‟s essential to collect and sort out data efficiently according to needs then come up with an appropriate interpretation and presentation. regarding the diversity of cultural heritage together with requirement of management and added value this research points out that holding the content of monument is the key issue for the dbms. 5.4.2 the integration of different format audio and visual are two main types of digital information that are developed into various formats. these formats were modified by different software and mostly controlled by commercial companies. to build a convenient platform for the exchange, information of different formats is an important work. 
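as a small, purely illustrative sketch of the exchange problem just described, the record below wraps material of different native formats in one common structure, so that heterogeneous audio, visual and textual items can be passed between systems through a single platform; the field names and the sample file are hypothetical.

# sketch of a common exchange record for heterogeneous digital material: whatever
# the native format (image, audio, video, document), each item is described by the
# same minimal set of fields; all names here are hypothetical
from dataclasses import dataclass, asdict
import hashlib
import json
import pathlib

@dataclass
class ExchangeRecord:
    item_id: str        # unique identifier within the archive
    media_type: str     # "image", "audio", "video", "document", ...
    native_format: str  # e.g. "TIFF", "WAV", "MPEG-4", "PDF"
    uri: str            # where the original file is stored
    checksum: str       # fixity information for long-term preservation

def make_record(item_id, media_type, native_format, path):
    """describe one file, of any native format, with the common record"""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return ExchangeRecord(item_id, media_type, native_format,
                          path.resolve().as_uri(), digest)

if __name__ == "__main__":
    sample = pathlib.Path("facade_photo.tif")        # hypothetical file
    sample.write_bytes(b"placeholder image bytes")   # stand-in content
    record = make_record("TPE-001-IMG-0001", "image", "TIFF", sample)
    # serialising the record as json gives a platform-neutral package that
    # another institution's system can ingest regardless of the native format
    print(json.dumps(asdict(record), indent=2))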
besides that, a cloud technique developed for the management of vast digitized information is another key issue, in this field. 5.5 the extension map using digital techniques as a powerful tool for managing, reusing, and extending value of cultural heritage has become an inevitable trend in the 21st century. besides the stakeholder, the participation team should have a wide range of experts. from digitization point of view, an extension map for the integration technique was proposed for the next step (figure 8) . ________________________________________________________________________________ geoinformatics ctu fce 2011 363 figure 8: the diagram of digitizing technique 5. acknowledgement this research was supported by nsc 99-2632-h-163-001-my2 6. references [1] unesco world heritage center (2008), operational guidelines for implementation of the world heritage convention. [2] jukka jokilehto(1999). a history of architectural conservation. pp.290. [3] prepared by the national library of australia (2003), guidelines for the preservation of digital heritage gal framework radek bartoň, martin hrubý faculty of information technology brno university of technology e-mail: xbarto33@stud.fit.vutbr.cz, hrubym@fit.vutbr.cz keywords: design, gis, grass, open source, library abstract gal (gis or grass? abstraction layer) framework is meant to be multiplatform opensource library with certain tools and subsidiary daemons for easy implementation of distributed modules for gis grass1 in static or dynamic programming languages. this article aims to present some ideas behind this library and bait a fresh meat for this project since its complexity needs more spread development team not just single person. project homepage can be found at http://gal-framework.no-ip.org. motivation a year ago i was trying to implement some feature to nviz module and i got acquainted with grass’s core libraries. accustomed to well-designed and well-documented apis like qt2, coin3 or wxwidgets4 it was shocking to me to meet such old-styled api with underestimated state of documentation, so i gave up that job in few days in anger. but because i like grass style from user’s point of view and its complexity and scalability of provided modules, i wanted 1 http://grass.itc.it/ 2 http://doc.trolltech.com/ 3 http://doc.coin3d.org/coin/ 4 http://www.wxwidgets.org/manuals/stable/wx contents.html geinformatics fce ctu 2007 33 http://grass.itc.it/ http://gal-framework.no-ip.org http://doc.trolltech.com/ http://doc.coin3d.org/coin/ http://www.wxwidgets.org/manuals/stable/wx_contents.html gal framework to provide an easy and modern approach to module development along with current ones for those that are not satisfied with present possibilities as me. so i decided to work on gal framework as my diploma work and thesis. as actual advancement of common information technology in hardware domain heads to multiprocessored devices and usage expansion of dynamic languages such as python, java, c# or ruby is here, there is need to consider these factors in grass development too. gal framework should be answer to these trends. principles gal framework should be alternative to current way of creating new modules for grass and some current modules can be reimplemented in gal framework too if some weight reasons occurs. illustration of future gal framework role in grass module development can be seen on figure 1. current grass code is displayed yellow and it is formed by grasslib and code of each of grass modules. 
green gal framework code is formed by core gal library and code of new grass modules which can be implemented with it or code of reimplemented modules (v.proj for example). gal framework then may depend on grasslib itself and other possible libraries such as d-bus5 for distributive execution of functions or terralib6 for faster implementation of gis related functionality (red block). figure 1: role of gal framework in module development. gal framework internally uses component architecture which means that whole system is formed by bunch of components which communicate which each other by interfaces. some components declare interfaces, some of them implement interfaces and some of them use interfaces. using uml syntax this can be displayed as on figure 2. there are four components and one interface on a figure. component 1 declares an interface interface which indicates stereotype <>. component 1 and component 2 uses in5 http://www.freedesktop.org/wiki/software/dbus 6 http://www.dpi.inpe.br/terralib/ geinformatics fce ctu 2007 34 http://www.freedesktop.org/wiki/software/dbus http://www.dpi.inpe.br/terralib/ gal framework figure 2: component architecture in uml syntax. terface (symbolized by dashed line with arrow). that means that they are calling interface’s functions. component 3 and component 4 implement interface functions which is symbolized by solid line. there is an abstraction over interface functions called slots which may be configured some ways to specify which implementation will be executed when interface slot is called. sometimes you may want to execute just lastly registered implementation, sometimes you may want to call each of them, etc. to allow public access to all available components and interfaces in the system, a common access point must be introduced. in this case, it is called component manager and it serves for new component or interface registration as long as registration of interface implementations. figure 3 shows simplified class diagram for relationships between component manager, components, interfaces and slots: figure 3: class diagram of component architecture. let’s see on figure 4 how can be this component architecture applied in practice: there is a modulecomponent component implementing a grass module at the top which uses three different (and imaginary for now) interfaces: � ivectorlayerprovider, � irasterlayerprovider, � imessagehandler for retrieving of vector or raster data and for message display and logging. ivectorlayerprovider interface is implemented by two different components registered in system. one of them shelters vector layer data in postgresql database, the second shelters for example data in mysql database. raster related interface irasterlayerinterface is geinformatics fce ctu 2007 35 gal framework figure 4: component architecture practically. then implemented by two components too. first offers raster data from postgresql and second enwraps raster data in ordinary data files. this diagram should demonstrate that using this approach module component can obtain gis data and it don’t need to care where and how are these data stored. when module does what it wants with retrieved data it needs to output some informations to console, logs or gui, it sends messages through imessagehandler interface. in discussed example is this interface implemented by two components which forwards messages to cli, gui or writes them to log files. this again demonstrates presentation of outputed data independence. 
usage and examples although bindings to most of commonly used dynamic languages are planned, at least for python and java, all of following code examples are written in c++ because it will be main language used during gal framework development. get known that all of this example usages reflect current state of design and may change in future. the simplest interface usage example which doesn’t create component and can be used only as a client module is listed here: #include #include #include #include #include int main(int argc, char * argv[]) { try { 01: gal::initialize(); // obtain slots of iexampleinterface functions. geinformatics fce ctu 2007 36 gal framework 02: componentmanager & component_manager = gal::getcomponentmanager(); 03: interface & interface = component_manager.getinterface("iexampleinterface"); 04: function1slot & function1 = dynamic_cast(interface.getslot("function1")); 05: function2slot & function2 = dynamic_cast(interface.getslot("function2")); // call functions’ implementation. 06: std::cout << "function 1 returned: " << function1() << std::endl; 07: std::cout << "function 1 returned: " << function2("example", 10) << std::endl; 08: gal::finalize(); } catch (exception exception) { std::cout << exception.getmessage() << std::endl; return exit_failure; } return exit_success; } code is enclosed between initialization and deinitialization functions: gal::initialize() and gal::finalize(). first we obtain reference to publicly available component manager at line 02. then we get interface object at line 03. at line 04 and 05, we get slot objects which can be remembered during whole program run time for interface’s functions execution. this can minimalize needed overhead. lines 06 and 07 do this execution. in this example we used only imaginary interface iexampleinterface but in real case it can be for example interface which provides access to raster layer and we will compute some statistical values over returned data. note, that from this point of view it doesn’t matter if interface’s slots are implemented in statically linked c++ code, loaded and executed from plug-in in shared library or they are executed distributively in python code or on distant machine via rpc call. all of this is only up to slot implementations and its configuration. the second example shows how can be certain interfaces implemented by components: #include #include #include #include #include #include #include // examplecomponent class definition. 01: class examplecomponent: public component { public: 02: virtual void initialize(); 03: virtual void finalize(); 04: virtual void * getfunction(const char * name); 05: static const char * function1(); 06: static const char * function2(const char * first, cont int second); private: 07: interface * interface; }; // component initialization. gets instance of interface and registers it // in component manager. 08: void examplecomponent::initialize() { 09: componentmanager & component_manager = gal::getcomponentmanager(); 10: this->interface = &(component_manager.getinterface("iexampleinterface")); geinformatics fce ctu 2007 37 gal framework 11: component_manager.registerimplementation(*(this->interface), *this); } // component deinitialization. unregisters interface. 12: void examplecomponent::finalize() { 13: componentmanager & component_manager = gal::getcomponentmanager(); 14: component_manager.unregisterimplementation(*(this->interface), *this); 15: this->interface = null; } // implementation of first interface function. 
16: const char * examplecomponent::function1() {
17:     std::cout << "iexampleinterface::function1" << std::endl;
18:     return "example";
}

// implementation of the second interface function.
19: const char * examplecomponent::function2(const char * first, const int second) {
20:     std::cout << "iexampleinterface::function2" << std::endl;
21:     return "example";
}

// overridden virtual function which returns a function pointer to the requested
// implementation of an interface function.
22: void * examplecomponent::getfunction(const char * _name) {
23:     std::string name = std::string(_name);
24:     if (name == "function1") {
25:         return (void *) &(this->function1);
        }
26:     else if (name == "function2") {
27:         return (void *) &(this->function2);
        }
28:     else {
29:         return null;
        }
}

int main(int argc, char * argv[]) {
    try {
30:     gal::initialize();
        // create the component instance.
31:     component * component = new examplecomponent();
32:     component->initialize();
33:     gal::daemonize();
        // destroy the component instance.
34:     component->finalize();
35:     delete component;
36:     gal::finalize();
    }
    catch (exception exception) {
        std::cout << exception.getmessage() << std::endl;
        return exit_failure;
    }
    return exit_success;
}

the examplecomponent class is defined first, at lines 01-07. each component needs to override three virtual methods: component::initialize() (lines 08-11) for component initialization and registration of the interface implementation, component::finalize() (lines 12-15) for component deinitialization and unregistration of the interface implementation, and component::getfunction() (lines 22-29) for access to the implementations of the interface functions. the main program block (lines 30-36) then creates an instance of the component and turns the program into a daemon with the gal::daemonize() method; the interface implementation can then be called by external modules. in the case where the implemented interface is used only internally, as in this example, no daemonization is needed and the program would simply use the interface as in the first example.

the third example is a code snippet showing how to define a new interface with its slots:

// definition of a callbackslot-derived slot.
01: class function1slot: public callbackslot {
    public:
02:     function1slot();
03:     char * execute();
04:     char * operator()();
};

// definition of a dbusslot-derived slot.
05: class function2slot: public dbusslot {
    public:
06:     function2slot();
07:     char * execute(char * first, int second);
08:     char * operator()(char * first, int second);
};

// slot signature initialization.
09: function1slot::function1slot():
10:     callbackslot() {
11:     this->addreturnvalue(string);
}

// slot signature initialization.
12: function2slot::function2slot():
13:     dbusslot() {
14:     this->addargument(string);
15:     this->addargument(int32);
16:     this->addreturnvalue(string);
}

// packing and unpacking of arguments and return values for the d-bus call.
17: char * function2slot::execute(char * first, int second) {
18:     char * result = null;
19:     this->setargument(0, (void *) first);
20:     this->setargument(1, (void *) second);
21:     this->setreturnvalue(0, (void *) &result);
22:     this->callfunction();
23:     return result;
}

// syntactic sugar for using the slot as a functor.
24: char * function1slot::operator()() {
25:     return this->execute();
}

// syntactic sugar for using the slot as a functor.
26: char * function2slot::operator()(char * first, int second) {
27:     return this->execute(first, second);
}

// definition of the example interface with two function slots.
28: class iexampleinterface: public interface {
    public:
29:     iexampleinterface();
30:     ~iexampleinterface();
    private:
31:     slot * function1slot;
32:     slot * function2slot;
};

// interface initialization. instances of the two slots are created and added
// to the interface.
33: iexampleinterface::iexampleinterface():
34:     interface(),
35:     function1slot(null), function2slot(null) {
36:     this->setid("iexampleinterface");
37:     this->function1slot = new function1slot();
38:     this->function2slot = new function2slot();
39:     this->addslot("function1", *(this->function1slot));
40:     this->addslot("function2", *(this->function2slot));
}

// interface deinitialization. the two slot instances are removed and
// destroyed.
41: iexampleinterface::~iexampleinterface() {
42:     this->removeslot("function1");
43:     this->removeslot("function2");
44:     delete this->function1slot;
45:     delete this->function2slot;
}

first of all, two slots are defined at lines 01-04 and 05-08. one of them is derived from callbackslot, which is a simple static implementation of the slot mechanism, and the second is derived from dbusslot, which implements the slot mechanism via d-bus calls. the slot constructors at lines 09-11 and 12-16 specify their signatures by adding input and output arguments. for dbusslot-derived slots it is also necessary to define how arguments are packed and unpacked, which is done by the function2slot::execute() method at lines 17-23. iexampleinterface is then defined at lines 28-32; its slots are created in the constructor at lines 33-40 and destroyed in the destructor at lines 41-45. finally, to make this interface usable in the whole system, its instance has to be registered in the component manager using the componentmanager::registerinterface() method.

current state

at the time of writing, the gal framework is in an early stage of design, with work on a test implementation of the designed ideas in progress. test component management and the client side (and partially the server side) of the d-bus slot implementation are finished. knowledge gained from the test implementation will later be fed back into the current design. when all parts of the component management and slot mechanisms have been designed and proved by implementation, the gal framework can proceed to the design of appropriate gis-related interfaces, which will serve for further implementation of grass modules. currently the gal framework uses scons as a build tool, subversion as a source management system, trac as a project management system with wiki and discussion forum, umbrello for uml modeling, swig for bindings generation, and libffi and d-bus as libraries. anyone interested in this approach who wants to contribute to the gal framework, at least with comments or new ideas, may contact me at xbarto33@stud.fit.vutbr.cz or create an account at http://gal-framework.no-ip.org to get edit rights for the wiki and discussion forum.

references

1. christopher lenz, dave abrahams and christian boos: trac component architecture, http://trac.edgewall.org/wiki/tracdev/componentarchitecture
2. trac – http://trac.edgewall.org/
3. scons – http://www.scons.org/
4. subversion – http://subversion.tigris.org/
5. umbrello – http://uml.sourceforge.net/index.php
6. swig – http://www.swig.org/
7. libffi – http://sourceware.org/libffi/
8. d-bus – http://www.freedesktop.org/wiki/software/dbus/
a demonstration of the possible use of openjump in soa

martin prager
institute of geoinformatics, faculty of mining and geology, vsb-tuo
e-mail: martin.prager.hgf@vsb.cz

keywords: orchestration, chaining, web services, geoweb, jump, soa

abstract

this short article aims to demonstrate, by means of a newly developed extension, the broad applicability of the open-source product openjump (formerly jump) and its modularity, specifically in the area of service oriented architecture (soa) with a focus on the chaining of web services.

introduction

service oriented architecture (soa) attracts interest across all areas of the it industry. driven by standards such as xml, web services and soap, soa is quickly penetrating the core of applications essential for carrying out business operations. with the growing number of available services and the increasing demands on efficiency and on the difficulty of the tasks being solved, the results (resources) of individual services, or their static interconnection, are often no longer sufficient. we are forced to start chaining services dynamically, combining them according to the current needs and possibilities of the user (cost, accuracy of results, speed, etc.). there are two levels of service chaining, known as orchestration and choreography (the latter with a more global focus). orchestration and choreography are associated with a number of standards (bpel, ws-cdl, xlang, etc.) and organizations (oasis, w3c, bpmi, etc.) [4].

extension for openjump

the openjump api gives programmers access to all functions, including the i/o interface, data sets, visualization and spatial operations. this makes it a highly modular and extensible product. extensions are realized through pluggable modules. these modules, which are loaded when the "workbench" starts, can take the form of "plugins" (menu items), "cursor tools" (toolbar buttons), "renderers" (ways of rendering data) and "datasources" (ways of loading and saving various data formats) [3].

the extension, which i have named "wsplugin", provides the user with a gui for creating simple sequential chains of web services. it allows wsdl and wms services (wms only at the end of a chain) to be added individually or in bulk, with support for searching the wsco catalogue (http://gisak.vsb.cz/wsco/intranet/). the chain itself is then created by the user by dragging the individual methods ("drag-and-drop" of operations) onto the working area. figure 1 shows the graphical interface for connecting the parameters of individual methods (activated by clicking on a particular method). parameters can be connected directly, or transformed using the xsl language or xpath expressions. the use of xpath expressions (see http://www.w3.org) is apparent both in the following figure and in the attached sample bpel process.

figure 1: graphical interface for setting the relations between methods [4]

on the right of figure 2 the running extension can be seen with an already executed chain of services, which returns a gml layer as its response. this layer is automatically visualized in the openjump map window.
if a wms service stands at the end of the chain, it is displayed with the subsequent possibility of modifying its parameters.

figure 2: the running extension in the openjump environment [4]

as already mentioned in the introduction, the chaining of services is connected with a number of specifications. over the last two years the bpel language has become an important standard, raising the use of soa from the it level to the business level. it enables organizations to automate their business processes through the orchestration of services both inside and outside the given organization [1]. it was developed by microsoft and ibm and standardized by the non-profit consortium oasis (http://docs.oasis-open.org). for this reason, support for the bpel language was built into the extension. if a user wants to make a given chain of services available to a wider public, it can be exported (only files exported from the extension can be imported back) as a process into this language. a simple wizard serves this purpose, allowing some basic parameters to be set (the process name, the process method, the selection of relevant input parameters, etc.). besides these basic parameters, the export can be targeted directly at the format of the "activebpel engine" (http://www.active-endpoints.com/open-source-active-bpel-intro.htm). a bpel process appears externally as a standard web service, which can be loaded back into the extension and connected with further services or processes. the extension also allows some environment properties to be set, work with a project, etc.

the extension currently enables the graphical design of sequential chains of web services with support for the direct display of map outputs. unlike other products intended for chaining web services, it allows not only wsdl services but also wms services to be included in a chain. although bpel is a standard, the portability of a process between different bpel engines is problematic: most of these engines need, in addition to the standard bpel files, further supporting files for interpretation, which are unfortunately specific to the individual engines. as can be seen, the openjump environment can probably be extended with almost anything; the very good documentation of the program and its broad community contribute considerably to this fact.

basic structure of a bpel process

the meaning of the individual elements is as follows [2]:

• partnerlinks – here the services which the process uses and provides are defined. within each partnerlink at least one of the two possible roles must be specified: the role of the process itself is given by the myrole attribute, and conversely the role of the partner by the partnerrole attribute;
• variables – this part specifies the variables which the process uses. bpel allows variables to be declared in three ways: as a wsdl message type, as an xml schema type (simple or complex) and as an xml schema element;
• correlationsets – before the given process is started, an instance of it is created. the purpose of this element is to ensure that incoming messages are delivered to the corresponding process instances;
• faulthandlers – as the name itself says, this element serves for handling errors that occur during the run of the process. errors can be raised explicitly (using the <throw> element) or implicitly (e.g. as the result of an error when calling a partner service). for catching errors, the correspondingly named <catch> element is used;
• compensationhandler – this element provides the possibility of recovering from an error within a specified scope,
in other words, of attempting a kind of clean-up and a return to a state from which the process can continue after the error;
• eventhandlers – serve for capturing events. in bpel there are two types: incoming messages (corresponding to wsdl operations) and alarms (activated after a user-specified time). each of these elements must contain at least one of the mentioned types;
• activities – the structure of the process continues with so-called activities (elements), which implement its actual flow. each process has one main activity. bpel contains a set of simple activities which can be composed to create compound activities (see [2]).

sample bpel process

[the xml listing of the sample process is not reproduced here; the process chains getareas, getxy, xy2xy and getfeaturesgml operations over the 'obce32633' and 'voda' layers, extracts point coordinates with xpath substring-before/substring-after expressions, converts them from 'epsg:32633' to 'esri:102065', and its fault handler reports "an error occurred while submitting the order. all transactions have been canceled."]

references

1. blanvalvet, s; bolie, j; cardella, m; carey, s; chandran, p; coene, y; geminiuc, k; jurič, m; nguyen, h; poduval, a; pravin, l; thomas, j; todd, d. bpel cookbook: best practices for soa-based integration and composite applications development. birmingham: packt publishing ltd., 2006. isbn 1-90481133-7
2. oasis. [online]. 2007. available at: http://www.oasis-open.org
3. openjump. [online]. 2007. available at: http://openjump.org
4. prager, m. řetězení webových služeb v prostředí open source gis (chaining of web services in an open source gis environment). diploma thesis. ostrava, 2007. available at: http://gisak.vsb.cz/~pra089/texty/dp_pra089_v1_0.pdf

performance testing of download services of cosmc

jiří horák1, jan růžička1, jiří ardielli2
1 institute of geoinformatics, faculty of mining and geology, vsb – technical university of ostrava, czech republic
2 tieto czech s.r.o., czech republic
jiri.horak@vsb.cz, jan.ruzicka@vsb.cz, jiri.ardielli@tieto.com

abstract

the paper presents the results of performance tests of the download services of the czech office of surveying, mapping and cadastre according to inspire requirements. the testing methodology is explained, including the monitoring of reference servers' performance. about 26 million random requests were generated in total, covering each monitored operation, layer and coordinate system. the temporal development of the performance indicators is analyzed and discussed. the results of the performance tests confirm compliance with the inspire qualitative requirements for download services. all monitored services satisfy the requirements on latency, capacity and availability; the latency and availability requirements are fulfilled with an abundant reserve. no problems in the structure and content of the responses were detected.

keywords: performance testing, spatial data, download service, inspire

1. introduction

the development of sdi is based on the integration of global, european, national and local spatial initiatives.
directive 2007/2/ec of the council and the european parliament establishes the legal framework for setting up and operating an infrastructure for spatial information in europe (inspire). inspire requires to establish and operate following types of network services for the spatial data sets and services [2]: discovery services, view services, download services, transformation services and services allowing spatial data services to be invoked. services provided within the frame of sdi have to be tested to verify the usability and satisfaction of end users. usually assessment of the content and function capabilities represents the primary (qualitative) part of the web service evaluation. nevertheless the critical part may be the evaluation of the service availability, its performance and other aspects of the quality. basic indicators of quality of services (qos) and limit values are given by regulations implementing inspire [6, 7]. full satisfaction of end users usually require to implement higher standards (than those given by inspire) and provide better performance [10]. novel approaches emphasize the central role of users and the importance of elaborated testing of the final user satisfaction [13]. the aim of our testing was to verify the fulfilment of obligatory parameters required by above mentioned regulations and to evaluate the capacity to meet also higher requirements. geoinformatics fce ctu 10, 2013 5 horák, j. et al.: performance testing of download services of cosmc a quantitative evaluation of the service quality should contain both server-side testing and client-side testing. server-side analyses usually explore web server log files, including i.e. click stream analysis [12]. one of the possible analytical objectives is to explore a dependency between a content of rendered image and time for its rendering [5]. results of the server-side tests can be used for optimization of the service based on several techniques (i.e. adjusting of service’s settings, geoweb caching, load balancing). the client-side testing enable to better simulate user conditions, however results are influenced by the client and network status. following types of client-side tests can be specified [3]: test precondition data, functional testing, stress test functionality, capacity testing and performance testing. performance testing is the most well-known form of testing. tests are based on software emulating common users' behaviour or uses some random pattern to access the server. quality of services according to the inspire directive follows the three aspects performance, capacity and availability of services [6]. the availability usually refers to a percentage of time when a user can access the service. inspire based qos does not distinguish if returned results are correct or not. even an error message returned by the monitored service is considered as an evidence for the availability of the service [1]. the first testing of qos for web map services (wms) according to the inspire directive (including cosmc services) was performed in the czech republic in 2008 and 2009 [4]. cosmc (czech office of surveying, mapping and cadastre) has been provided web services (wms) according inspire since april 2008. the implementation has been significantly influenced by establishing the base registers being a part of the czech e-government. the content of this register practically covers the requirements for the following themes: “cadastral parcels“, „buildings“, ”addresses“ and “administrative units“. 
download services for the “cadastral parcels” theme according to inspire was launched in may 2012. both predefined data set and direct access download service (wfs) are supported [11]. 2. methodology the download services of cosmc were tested during two weeks in may and june 2012. the subject of testing was following services: 1. http://services.cuzk.cz/gml/inspire/cp/epsg-5514/, 2. http://services.cuzk.cz/gml/inspire/cp/epsg-4258/, 3. http://services.cuzk.cz/wfs/inspire-cp-wfs.asp. first and second services provide pre-prepared files using gml version 3.2.1, while the third service represents wfs [11]. following operations were separately tested: • get download service metadata, • get spatial object, geoinformatics fce ctu 10, 2013 6 http://services.cuzk.cz/gml/inspire/cp/epsg-5514/ http://services.cuzk.cz/gml/inspire/cp/epsg-4258/ http://services.cuzk.cz/wfs/inspire-cp-wfs.asp horák, j. et al.: performance testing of download services of cosmc • get spatial data, • describe spatial data set, • describe spatial data object. the time schedule for testing ranges from monday morning to friday evening to cover usual service conditions. the standard load was 10 virtual users. the testing has been done on a client side out of intranet of the service [9]. the client runs out of an intranet of the service and it is not connected to the same router as the service. the list of requirements for operations has been set according to the analysis of regulations and recommendations focused to the services. jmeter (apache-jmeter 2.6 + jmeter plugins 0.5.2, gnu/linux, java openjdk 1.6) software was used for generating of requests and for logging service's responses. random requests were generated for each monitored operation, layer and coordinate system. spatial extents of the downloaded data sets were generated according to the expected behaviour of users. following requests have been generated: 1. getcapabilities, 2. getdata – approx. 12 thousands unique requests for each coordinate system, 3. getfeature to topics of parcel and boundary – approx. 200 thousands unique requests, 4. getfeature to the topic of zoning – approx. 50 thousands unique requests, 5. describefeaturetype and xsd for all themes. the requests were included to one joint jmeter queue to distribute requests of all operations during the testing equally. following parameters have been monitored: date and time of response arrival, time spent till the arrival of the first byte of the response (time to first byte ttfb), latency as a time spent till the arrival of the last byte of the response (time to last byte ttlb), size of the response in bytes (size), the response code of the server (to identify errors and their sources), identification of the group of tests (to recognize the true one from ten used threads for testing where each thread represents one virtual user), the identification of the request (type of operation, coordinate system, etc.), layer name, spatial extent. a response time (rt) was calculated as a difference between ttlb and ttfb. the speed of downloading (data flow, df) was obtained dividing size by rt. simultaneously to the testing of evaluated download services, testing of referencing servers were performed to validate results of our service availability testing. the reason was to eliminate results of testing in time periods influenced by various problems in the network and namely at the client. referencing servers are expected to be highly available in normal conditions. 
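as a concrete illustration of the derived indicators defined above, the following minimal c++ sketch turns one logged response into rt and df and flags error responses; the record layout and field names are assumptions made for illustration only and do not reproduce the actual jmeter log format:

#include <iostream>
#include <vector>

// illustrative record for one logged response; the real jmeter log
// contains more fields and different names.
struct response_record {
    double ttfb_ms;     // time to first byte [ms]
    double ttlb_ms;     // time to last byte [ms]
    long   size_bytes;  // size of the response
    int    http_code;   // 200 means ok, anything else is counted as an error
};

// derived indicators as defined in the methodology above.
struct derived_metrics {
    double rt_ms;       // response time = ttlb - ttfb
    double df_mbps;     // data flow = size / rt, in megabytes per second
    bool   error;
};

derived_metrics derive(const response_record & r) {
    derived_metrics m{};
    m.rt_ms = r.ttlb_ms - r.ttfb_ms;
    // bytes per millisecond -> megabytes per second (1 mb = 1e6 b here)
    m.df_mbps = (m.rt_ms > 0.0) ? (r.size_bytes / m.rt_ms) * 1000.0 / 1.0e6 : 0.0;
    m.error = (r.http_code != 200);
    return m;
}

int main() {
    std::vector<response_record> log = {
        {120.0, 2120.0, 1500000, 200},   // 1.5 mb delivered in 2 s -> 0.75 mb/s
        {90.0,  95.0,   2048,    500},   // server error, counted as unavailable
    };
    for (const auto & r : log) {
        derived_metrics m = derive(r);
        std::cout << "rt=" << m.rt_ms << " ms, df=" << m.df_mbps
                  << " mb/s, error=" << (m.error ? "yes" : "no") << "\n";
    }
    return 0;
}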
requests to referencing servers were generated in one second interval per each thread and included to the joint queue of jmeter. recorded results were analysed and recognized periods with rt above the limit (indicating an interruption or remarkable traffic slow-down of geoinformatics fce ctu 10, 2013 7 horák, j. et al.: performance testing of download services of cosmc the network connection) were excluded from the processing of results of the download service testing. the following reference servers were used: 1. http://www.google.cz/, 2. http://www.seznam.cz/, 3. http://maps.google.cz/, 4. http://www.mapy.cz/. performance testing was performed with a constant load of ten parallel virtual users. 3. monitored qualitative parameters according to the commission regulation no 1088/2010 regarding download services and transformation services the following quality of service criteria relating to performance, capacity and availability shall apply [7]: 3.1. performance the normal situation represents periods out of peak load. it is set at 90 % of the time. for the get download service metadata operation, the response time for sending the initial response shall be maximum 10 seconds in normal situation. for the get spatial data set operation and for the get spatial object operation, and for a query consisting exclusively of a bounding box, the response time for sending the initial response shall be maximum 30 seconds in normal situation then, and still in normal situation, the download service shall maintain a sustained response greater than 0,5 megabytes per second or greater than 500 spatial objects per second. for the describe spatial data set operation and for the describe spatial object type operation, the response time for sending the initial response shall be maximum 10 seconds in normal situation then, and still in normal situation, the download service shall maintain a sustained response greater than 0,5 megabytes per second or greater than 500 descriptions of spatial objects per second. 3.2. capacity the minimum number of simultaneous requests to a download service to be served in accordance with the quality of service performance criteria shall be 10 requests per second. the number of requests processed in parallel may be limited to 50. 3.3. availability the probability of a network service to be available shall be 99 % of the time. geoinformatics fce ctu 10, 2013 8 http://www.google.cz/ http://www.seznam.cz/ http://maps.google.cz/ http://www.mapy.cz/ horák, j. et al.: performance testing of download services of cosmc 4. results 4.1. get download service metadata ttfb although the tests were undertaken on a client-side, the results show an abundant fulfilling of the criteria (unambiguous fulfilling with abundant reserves). a quotient of requests with a time to the first byte (ttbf) for getcapabilities less than 10s is 0.004%, thus the limit of 10% is far away from the result. during the testing period we did not recognise (in any day or any hour) such results that are close to the required limit. the monitoring by each hour shows that there are two periods during a day with the increased average response time: between 9 am and 2 pm cest (central european summer time) and between 8 and 9 pm cest (figure 1). the average response time is by 38% higher in these peak periods than in the quietest period. the occurrence of the high rt partly corresponds with periods of the highest network traffic. 
figure 1: minimal, average and maximal latency (ttfb) according to hours for get download service metadata. 4.2. get download service metadata availability the availability of the service has been evaluated from the occurrence of errors and their source. all responses without a hypertext transfer protocol (http) response code 200 (means ok) has been classified as errors. an average count of errors has been 0.004% that is much smaller than the required limit of the service availability (max. 1% of time with unavailable service). the limit has not been crossed or approached in any of the monitored days. geoinformatics fce ctu 10, 2013 9 horák, j. et al.: performance testing of download services of cosmc 4.3. get download service metadata – data structure compatibility the tests did not show any error, so we can confirm that all metadata items are in conformance with a declared structure. 4.4. get download service metadata – expert evaluation of the content two problems were detected in the content of the obtained metadata. the response does not contain information about the used language and listed feature types did not contain a reference to metadata. similar errors have been detected in testing of view services by kliment a cibulka [8], and we can expect such errors when testing other services. according to the test results and subsequent recommendations the provider of the service improved the metadata. 4.5. get spatial data set – ttfb the results show unambiguous fulfilment of the required limit of ttfb (with an abundant reserve). responses with a longer latency (ttfb) are occurred in a few cases. this conclusion is valid for all monitored layers, days and hours. 4.6. get spatial data set – data flow under 0.5 mb/s according to the evaluation of the data download speed after the first packet of data is received we can approve a fulfilment of the requirement. the 99.7% of the requests are under the required limit (figure 2). there are no results for any layer, day or hour that would imply to fail in the required criterion. figure 2: occurrence of data flows bellow and above 0.5 mb/s in monitored days for get spatial data set. geoinformatics fce ctu 10, 2013 10 horák, j. et al.: performance testing of download services of cosmc 4.7. get spatial data set – availability the results are fully satisfactory and values are usually 100 times better than the required limit. this conclusion is valid to all monitored day or hour. average values of the error occurrence or service unavailability are 0.004% for the most of operations. the highest value 0.021% was recorded at may 31st that is still much smaller than the limit 1%. 4.8. get spatial data set – data structure compatibility tests for data structures were focused on a conformance of obtained data with referenced xml schemes. tests did not show any error for all tested files. 4.9. get spatial object – ttfb the results show satisfactory fulfilment of the required limit of ttfb. responses with a longer latency are occurred in few isolated cases. this statement is true for all monitored layers, days and hours. the graph depicts the average time of ttfb ranges between 500 and 727 ms (figure 3). in several days we did not record any response with ttfb above the limit of 30s. in other days such unsatisfactory responses occur rarely (from 1 to 6, maximum is 9 cases at 31st of may). figure 3: minimal, average and maximal latency (ttfb) according to days for get spatial object. 
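the per-operation results reported in this section reduce to counting, for every logged request, whether the latency, data-flow and availability limits from section 3 were met. the following self-contained c++ sketch illustrates that aggregation; the record structure and the sample values are illustrative assumptions, not the measured data:

#include <iostream>
#include <vector>

// one evaluated request; the thresholds below are the inspire limits cited
// in section 3, the structure itself is only an illustrative assumption.
struct evaluated_request {
    double ttfb_s;    // latency (time to first byte) in seconds
    double df_mbps;   // sustained data flow in mb/s
    bool   error;     // response code other than 200
};

struct compliance_summary {
    double latency_ok_pct;   // must be >= 90 % (normal situation)
    double dataflow_ok_pct;  // must be >= 90 % above 0.5 mb/s
    double availability_pct; // must be >= 99 %
};

compliance_summary summarize(const std::vector<evaluated_request> & reqs,
                             double latency_limit_s) {
    if (reqs.empty()) return {0.0, 0.0, 0.0};
    std::size_t latency_ok = 0, dataflow_ok = 0, available = 0;
    for (const auto & r : reqs) {
        if (r.ttfb_s <= latency_limit_s) ++latency_ok;
        if (r.df_mbps >= 0.5) ++dataflow_ok;
        if (!r.error) ++available;
    }
    const double n = static_cast<double>(reqs.size());
    return {100.0 * latency_ok / n, 100.0 * dataflow_ok / n, 100.0 * available / n};
}

int main() {
    // e.g. get spatial data set: latency limit 30 s; for the metadata and
    // describe operations the limit would be 10 s instead.
    std::vector<evaluated_request> reqs = {
        {0.6, 0.9, false}, {0.7, 1.2, false}, {31.0, 0.4, false}, {0.5, 0.8, true},
    };
    compliance_summary s = summarize(reqs, 30.0);
    std::cout << "latency ok: " << s.latency_ok_pct << " %, "
              << "data flow ok: " << s.dataflow_ok_pct << " %, "
              << "availability: " << s.availability_pct << " %\n";
    return 0;
}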
note: values for saturday and sunday (2nd-3rd of june) are calculated from small amount of requests, because the monitoring was not performed for whole day. geoinformatics fce ctu 10, 2013 11 horák, j. et al.: performance testing of download services of cosmc 4.10. get spatial object – data flow below 0.5 mb/s according to the evaluation of the data download speed after the first packet of data is received we can approve a satisfaction of the requirement with an abundant reserve. the 99.7% of the requests are under the limit. the situation is similar for all layers – results are 99.6% for cadastralboundary, 99.6% for cadastralparcel and 100% for cadastralzoning. the rule has not been broken or even reached in any of the monitored days or hours. 4.11. get spatial object – availability the results are fully satisfactory and values are usually 100 times better than the required limit. this conclusion is valid to all monitored day or hour. average values of the error occurrence or service unavailability are 0.004% for the most of operations. the highest value 0.022% was recorded at may 31st that is still much smaller than the limit 1%. 4.12. get spatial object – data structure compatibility tests for data structures were focused on a conformance of obtained data with referenced xml schemes. tests did not show any error for all tested files. 4.13. describe spatial data set – ttfb the results approve the fulfilment of the required limit of ttfb because 98.7% of requests satisfied the limit. the fulfilment of the criteria (at least 90% of data flows more than 0.5 mb/s) can be theoretically endangered in case of two schemas (basicfeature.xsd and utilityandgovernmentalservices.xsd). most of requests have been responded within a first return packet, thus the size of the response is small enough. unfortunately, measuring of the speed for such small amount of data may not be reliable. in the case of basicfeature.xsd about 89% of responses are transferred with the required speed. similarly the estimation of data flow for utilityandgovernmentalservices.xsd is about 82% of responses with the satisfactory speed. if we take into account the amount of data transferred within the first packet (the response time is measured between the time of delivering of the first and the last part of response which may differ from time of delivering the first byte) the criteria should be probably fulfilled. 4.14. describe spatial data set – availability the results are fully satisfactory and values are extremely lower than the required limit. this statement is valid to all monitored day or hour. average values of the error occurrence or service unavailability are 0.003% for the most of operations. the highest value 0.02% was recorded at may 31st that is still much smaller than the limit 1%. 4.15. describe spatial data set – data structure compatibility all xml schemes are identical to the schemes available at the inspire portal. geoinformatics fce ctu 10, 2013 12 horák, j. et al.: performance testing of download services of cosmc 4.16. describe spatial object type – ttfb the results show satisfactory fulfilment of the required limit of ttfb. responses with a longer response time are occurred in few isolated cases. this statement is true for all monitored layers, days and hours. 4.17. describe spatial object type – data flow under 0.5 mb/s according to the evaluation of the data download speed after the first packet of data is received we can approve a satisfaction of the requirement with an abundant reserve. 
99.8% of the requests fulfil the limit. there are no results for any layer, day or hour that would imply to fail in the required criterion. 4.18. describe spatial object type – availability the results are fully satisfactory and values are much lower than the required limit. this statement is valid to all monitored days or hours. average values of the error occurrence or service unavailability are 0.004% that is significantly below the limit 1%. 4.19. describe spatial data type – data structure compatibility all responses are in conformance with xml schemes verified in the operation describe spatial data set. 5. conclusion the aim of the analysis was to evaluate the implementation of requirements of the inspire directive for the download service of cosmc. operations get download service metadata, get spatial object, get spatial data, describe spatial data set and describe spatial object type were tested. the testing included two performance tests, each running for one week, performed from 28th of may to 10th of june 2012. the testing was done on a client-side out of an intranet of the service. jmeter software was used with a randomly generated set of requests for monitored operations, layers and coordinate systems. results were stored into logs that were used for the analysis. there were generated about 26 millions of requests from testing clients from vsb-technical university of ostrava for load tests altogether. responses logged in a time of recognized problems in the intranet of clients (identified by parallel testing of an availability of four reference servers) were excluded from the evaluation. results approve the fulfilment of all criteria for download services required by the inspire directive and corresponding regulations. the latency and availability of tested services are excellent providing abundant reserves to satisfy higher requirements than inspire limits. geoinformatics fce ctu 10, 2013 13 horák, j. et al.: performance testing of download services of cosmc acknowledgment we are appreciative employees of czech office for surveying, mapping and cadastre for their cooperation and support of the analysis. references [1] ardielli, j., horak, j. & ruzicka, j. (2012): view service quality testing according to inspire implementing rules. elektronika ir elektrotechnika, issue 3, pp. 69-74. [2] directive (2007): directive 2007/2/ec of the european parliament and of the council of 14 march 2007 establishing an infrastructure for spatial information in the european community (inspire), p. 14. [3] hicks, g., south, j. & oshisanwo, a. (1997): o. automated testing as an aid to systems integration. bt technology journal, no. 15, pp. 26–36. [4] horak, j., ardielli, j. & horakova, b. (2009): testing of web map services. international journal of spatial data infrastructures research, pp. 1–19. [5] horak, j., ruzicka, j., novak, j., ardielli, j. & szturcova, d. (2012): influence of the number and pattern of geometrical entities in the image upon png format image size. lecture notes in computer science, vol. 7197 lnai, issue 2, pp. 448-457. [6] inspire (2009): commission regulation (ec) no 976/2009 of 19 october 2009 implementing directive 2007/2/ec of the european parliament and of the council as regards the network services, p. 10. [7] inspire (2010): commission regulation (eu) no 1088/2010 of 23 november 2010 amending regulation (ec) no 976/2009 as regards download services and transformation services, l323/1, p. 10. [8] kliment, t. & cibulka, d. 
(2011): testovanie vyhladavacich a zobrazovacich sluzieb podla inspire poziadavek. proceedings of gis ostrava 2011, pp. 1–9. [9] kliment, t., tuchyňa, m. & kliment, m. (2012): methodology for conformance testing of spatial data infrastructure components including an example of its implementation in slovakia. slovak journal of civil engineering, vol. xx, no. 1, pp. 10-20. [10] mildorf, t. & cada, v. (2012): reference data as a basis for national spatial data infrastructure. geoinformatics fce ctu 9, pp. 51-61. [11] polacek, j. & soucek, p. (2012): implementing inspire for the czech cadastre of real estates. geoinformatics fce ctu 8, pp. 9-16. [12] sun, j. & xie, h. (2012): mining sequential patterns from web click–streams based on position linked list. proceedings of asian–pacific youth conference on communication technology, pp. 466–470. [13] voldan, p. (2011): developing web map application based on user centered design. geoinformatics fce ctu 7, pp. 131-141. geoinformatics fce ctu 10, 2013 14 ___________________________________________________________________________________________________________ geoinformatics ctu fce 259 a structured-light approach for the reconstruction of complex objects ilias kalisperakis, lazaros grammatikopoulos, elli petsa, george karras* department of surveying, technological educational institute of athens (tei-a), gr-12210 athens, greece ikal@teiath.gr, lazaros@teiath.gr, petsa@teiath.gr * department of surveying, national technical university of athens (ntua), gr-15780 athens, greece gkarras@central.ntua.gr keywords: photogrammetric scanning, 3d reconstruction, triangulation, camera calibration abstract: recently, one of the central issues in the fields of photogrammetry, computer vision, computer graphics and image processing is the development of tools for the automatic reconstruction of complex 3d objects. among various approaches, one of the most promising is structured light 3d scanning (sl) which combines automation and high accuracy with low cost, given the steady decrease in price of cameras and projectors. sl relies on the projection of different light patterns, by means of a video projector, on 3d object sur faces, which are recorded by one or more digital cameras. automatic pattern identification on images allows reconstructing the shape of recorded 3d objects via triangulation of the optical rays corresponding to projector and camera pixels. models draped with realistic phototexture may be thus also generated, reproducing both geometry and appearance of the 3d world. in this context, subject of our research is a synthesis of state-of-the-art as well as the development of novel algorithms, in order to implement a 3d scanning system consisting, at this stage, of one consumer digital camera (dslr) and a video projector. in the following, the main principles of structured light scanning and the algorithms implemented in our system are presented, and results are given to demonstrate the potential of such a system. since this work is part of an ongoing research project, future tasks are also discussed. 1. introduction recently, we are witness to a rapidly increasing demand for 3d content in a variety of application fields and scales, which range from city modeling [1], industrial metrology, inspection and quality control [2] or robotics [3] to 3d printing and rapid prototyping, augmented reality or entertainment. 
of course, among these fields cultural heritage recording and documentation occupy an outstanding position [4] (see also cited projects [5, 6]). a common representation of 3d content is through detailed 3d digital surface models (usually in the form of point clouds or 3d surface triangular meshes), rendered with photo-texture from real imagery. ideally, the 3d models must be generated rapidly and accurately by automatic techniques. responding to this demand are different approaches and technologies for 3d model acquisition, typically classified in the two main categories of “passive” and “active” techniques. the former methods rely on the processing of recorded ambient radiation (usually light reflectance); they include stereovision and, more generally, image based modeling [7, 8, 9, 10], but also shape from silhouettes [11] and shape from shading [12, 13]. active methods for 3d surface reconstruction, on the other hand, employ radiation – mainly laser or light – emitted onto the object surface and triangulated with the image optical rays. among these approaches, most common are 3d laser scanning, single-image slit-scanning and structured light scanning. structured light scanning (sl), which is the topic of this paper, thus rests on the principle of triangulation of optical rays. these systems consist of a video projector which projects a sequence of specific light patterns (black-and-white or grayscale stripes, colored line patterns, specific targets etc.), while one or more digital cameras record the deformation of the patterns projected onto the objects (which depends on the shape of the 3d surface). suitable encoding of the projected patterns allows establishing correspondences between camera and projector pixels. subsequently, knowledge of the interior (perspective) and the exterior geometry of projector and cameras allow 3d surface reconstruction with triangulation of the homologous optical rays. a main drawback of existing commercial solutions (both laser and sl scanners) is high cost. recently, however, several diy (do it yourself) systems have appeared which – utilizing means like a video camera and a simple lighting system (e.g. [14]), one or two webcams and a simple linear laser [15, 16], or a webcam and a video projector (http://mesh.brown.edu/byo3d/source.html) – allow 3d reconstruction of small objects at very low cost. regarding sl scanning, publications [17, 18, 19, 20], among others, have demonstrated that consumer video projectors and off-the ___________________________________________________________________________________________________________ geoinformatics ctu fce 260 shelf digital cameras can also be utilized by self-developed algorithms to create sl scanning systems, which provide accurate and reliable results, directly comparable to those from high cost commercial systems. thus, although several commercial structured light systems do exist, significant on-going research regarding open issues in the development and implementation of novel 3d scanning systems indicates that 3d shape reconstruction with sl methods is far from being considered as a fully solved or outdated problem. to name a few, innovations concern the form of projection patterns in order to either improve scanning accuracy [21, 22] or scanning in real time [23, 24, 25, 26 and 27]. in this context, in [28] a single pattern is used to record moving objects. 
moreover, in [29, 30] the question of using uncalibrated camera-projector systems by solving simultaneously the problems of system calibration and registration of scans from different viewpoints is addressed. characteristic for this active research activity is also the existence of the dedicated annual workshop procams (ieee international workshop on projector-camera systems: www.procams.org). here, a structured-light approach for 3d reconstruction is proposed which is based on 3d triangulation of optical rays generated by a video projector and recorded by a high resolution digital camera. the system is calibrated in one step by projecting a colored chess-board pattern on top of a targeted planar surface. the planar calibration board is rotated in space producing different perspectives captured by the camera. both the targets and corners of the projected pattern are automatically identified on the images with sub-pixel accuracy, allowing precise simultaneous estimation of internal and external system parameters in a bundle adjustment. the scanning process is performed through projection of binary gray coded pattern (horizontal and vertical stripes) onto the unknown 3d surface. in this way, matching of homologue projector and camera pixels can be performed without redundancy. subpixel correspondences are also established to increase precision and smoothness of the final 3d reconstruction, which is obtained through standard photogrammetric space intersection. in section 2 the main components of our 3d scanner are described, while sections 3 and 4 give details regarding the algorithms for calibration and automatic establishment of pixel correspondences. the approach has been used for recording two small statues, and results are presented in section 5. conclusions and future tasks are discussed in section 6. 2. system description the hardware components of the system implemented for the tests of this contribution are: a canon eos 400d dslr camera (resolution 3888 2592) a mitsubishi xd600 dlp video projector (resolution 1024 768) a calibration board (white non-reflective planar object with at least 4 black-and-white symmetrical targets printed with a laser printer). during scanning, the camera–projector relative position has to be fixed. the system is flexible regarding its hardware components as it may incorporate any combination of consumer video projector and digital camera (provided they can be controlled by a personal computer). additionally, it can be adapted to scan objects at different scales by changing the size of the calibration board and the distance between camera and projector (baseline) as well as by suitably adjusting the focus of both devices. 3. calibration essential step in 3d triangulation with sl systems is their calibration, i.e. the determination of the interior orientation (focal length, principal point position, lens distortion parameters) of video projector and digital camera as well as their (scaled) relative position in space. typically, a camera-projector calibration is carried out in two separate steps. first, the camera interior orientation is estimated, and next projector interior and relative orientations are found. in this context, [31] use a planar surface and a combination of printed circular control points and projected targets to perform plane-based calibration. then, using the epipolar constraint, homologies between projector and camera pixels are established and the projector is calibrated. 
in [32] the camera is calibrated and then a full sl scanning of a planar object containing targets is performed to obtain correspondences between the projector and the camera pixels. this procedure is repeated with different orientations of the planar surface, and synthetic images of what the projector would capture as a virtual camera are computed and used for its calibration. finally, [33] adopts the same technique of “virtual” projector images, while an alternative approach is also proposed, in which a calibrated stereo camera configuration is used to compute the 3d coordinates of a projected pattern on different planar object orientations; the acquired 3d coordinates are used in a subsequent step of projector calibration. http://www.procams.org/ ___________________________________________________________________________________________________________ geoinformatics ctu fce 261 figure 1: typical image for camera–projector system calibration. here a simultaneous estimation of camera and projector calibration along with their relative orientation is proposed. the implemented algorithm includes: the projection of a chessboard-like color pattern (red and white tiles) onto a planar object containing at least 4 (black and white) printed targets, and the recording of these projections by the camera (fig. 1). this is repeated for different successive orientations of the planar surface. the automatic detection (with sub-pixel accuracy) of targets and corners on the imaged color pattern. a bundle adjustment for optimal estimation of calibration parameters (initial values are found after [34], in which our team presented a fully automatic camera calibration toolbox, now available on the internet). 3.1 detection of printed targets to detect the 4, or more, printed targets (fig. 2, above) on the calibration images, harris corners are extracted. subsequently, possible areas of targets are identified by applying an intensity threshold to the three rgb channels. detected interest areas are expanded using morphological dilation, corners outside them are discarded. however, due to the symmetrical form of the chessboard-like targets, more than one interest points are assigned to each actual target. as seen in fig. 2 (below left), the 7 peaks of the “cornerness” measure corresponds to 7 harris points extracted at the corners and the centre of the target. what differentiate the central point from the rest are the strongly symmetrical intensity values of its neighborhood. here, a descriptor measuring the symmetry of a window around every image pixel is computed as the inverse of the norm of the intensity differences of anti-diametric pixels with respect to the centre of the window. in order to avoid homogeneous image regions, the descriptor values are multiplied by the local standard deviation and the “cornerness” measure. fig. 2 (below right) presents the descriptor values for a target area. in this way the central corner (which shows the highest “symmetry descriptor”) is assigned to the target. finally, a point is detected at this position with sub-pixel accuracy. it is noted that mirrored targets are used in the left and right sides of the planar surface to allow unique identification of target ordering under different perspectives. 3.2 detection of projected patterns detection of the projected color grid is more straightforward, as there are no severe perspective distortions in different views (horizontal lines are projected nearly horizontal). 
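a minimal, self-contained c++ sketch of the symmetry descriptor described above: it evaluates only the norm of the anti-diametric intensity differences (the multiplication by the local standard deviation and by the cornerness measure, as well as bounds checking, are omitted), and the gray_image structure is an assumption made purely for illustration:

#include <cmath>
#include <iostream>
#include <vector>

// grayscale image as a simple row-major float buffer.
struct gray_image {
    int width = 0, height = 0;
    std::vector<float> px;            // size == width * height
    float at(int x, int y) const { return px[y * width + x]; }
};

// symmetry descriptor at (cx, cy): inverse of the norm of the intensity
// differences between anti-diametric pixels of a (2r+1)x(2r+1) window.
// the caller must keep the window inside the image.
float symmetry_descriptor(const gray_image & img, int cx, int cy, int r) {
    double sum_sq = 0.0;
    for (int dy = -r; dy <= r; ++dy) {
        for (int dx = -r; dx <= r; ++dx) {
            if (dx == 0 && dy == 0) continue;
            // visit each anti-diametric pair only once
            if (dy < 0 || (dy == 0 && dx < 0)) continue;
            double d = static_cast<double>(img.at(cx + dx, cy + dy))
                     - static_cast<double>(img.at(cx - dx, cy - dy));
            sum_sq += d * d;
        }
    }
    const double eps = 1e-6;          // avoid division by zero
    return static_cast<float>(1.0 / (std::sqrt(sum_sq) + eps));
}

int main() {
    // 5x5 test image, point-symmetric about its centre, so the descriptor is large.
    gray_image img;
    img.width = img.height = 5;
    img.px = {1, 0, 1, 0, 1,
              0, 1, 0, 1, 0,
              1, 0, 9, 0, 1,
              0, 1, 0, 1, 0,
              1, 0, 1, 0, 1};
    std::cout << symmetry_descriptor(img, 2, 2, 2) << "\n";
    return 0;
}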
the projected red-white chess-board pattern is differentiated from the printed black and white calibration targets by a threshold in the hsv color space. chess-board corners are first detected by normalized cross correlation template matching with a predefined pattern. in this way, by connecting and labeling pixels of high correlation values, blobs are created, and then corners are estimated with sub-pixel accuracy at the centre of gravity of each blob. finally, nodes are brought in correspondence to the respective projected pattern nodes by means of an ordering process guided by their convex hull. ___________________________________________________________________________________________________________ geoinformatics ctu fce 262 figure 2: calibration target (above), harris „cornerness‟ measure (below left), „symmetry descriptor‟ (right). 3.3 bundle adjustment through the detection procedure, point matches are established between the camera and projector frames. during the calibration process the planar object changes orientation and position in 3d space against a fixed camera-projector system. of course, this is equivalent to a rigid body movement of the structured light system against a plane fixed in space. in this sense, the 4 targets correspond to 4 fixed points with known coordinates on a plane. the corners of the projected pattern in every successive frame correspond to different object points, all lying on the plane defined by the 4 targets. thus, a bundle adjustment solution is feasible. targets serve as full control points and grid corners as tie points with two unknown plane coordinates. unknown are also the 6 camera-to-projector orientation parameters and the 10 (2 5) interior orientation parameters of the camera and the projector. in all calibration tests the standard error of the adjustment was below 0.2 pixels. table 1 shows the calibration data for the experimental application described in section 5. ο = 0.08 pixel camera projector cx (pix) 4620.85 1.87 2177.01 1.16 xo (pix) 43.90 1.02 16.02 0.51 yo (pix) 3.25 0.60 462.96 0.62 k1( 10 09) 2.ř4 ± 0.07 7.32 ± 0.07 k2( 10 ) 1.2ř ± 0.1ř 4ř.71 ± 2.13 scaled relative orientation bx (cm) 30.27 ± 0.01 by (cm) ř.7ř ± 0.00 bz (cm) 11.40 ± 0.02 ◦ 5.01 ± 0.02 ◦ 30.40 ± 0.02 ◦ 2.04 ± 0.01 table 1: calibration results (from 6 images) ___________________________________________________________________________________________________________ geoinformatics ctu fce 263 4. matching camera and projector pixels crucial step in sl systems is, of course, the establishment of correct correspondences between projector and camera pixels, since obviously the accuracy of this matching procedure affects directly the accuracy of 3d reconstruction. our approach implemented so far is based on [20] and uses successive projections of binary gray-code patterns (fig. 3), i.e. black-and-white vertical and horizontal stripes of variable width (see fig. 4). each projection is recorded by the camera, and dark and light areas are identified on the image. since each projector pixel is characterized by a unique sequence of black and white values, identification of the sequence of dark and light values for each camera pixel directly allows establishing camera–projector pixel homologies. in particular, log2(n) patterns are required to uniquely model n different labels. thus, for a 1024 768 projector 10 patterns are needed for each direction. 
in order to determine whether an image pixel corresponds to a dark or a light projected area in a more robust way, the inverse of each gray-code pattern is also projected. consequently, a total of 40 different patterns are used. a pixel is characterized as illuminated with white color from a specific pattern if the difference of its intensity values corresponding to successive normal and negative patterns is positive. the rest of the pixels are assigned to dark values (fig. 5). finally, pixels with absolute differences less than a threshold (e.g. 4 gray values) can be rejected as outliers. figure 3: binary gray code. figure 4: examples of black and white vertical and horizontal projection patterns. figure 5: detection of illuminated and dark areas. ___________________________________________________________________________________________________________ geoinformatics ctu fce 264 due to differences in the camera and projector resolutions (cameras have usually higher resolution) several camera pixels may be assigned to the same projector pixel. this results in 3d reconstructions with discrete steps and strong moiré-like artifacts (fig. 6, left). thus, to obtain more accurate and smooth 3d reconstructions each camera pixel must be associated with a unique sub-pixel point on the projector frame. different approaches to obtain such sub-pixel correspondences exist in literature (for a taxonomy of state-of-the-art methods see [35]). here, we have adopted the approach of [20] who, after establishing correspondences at pixel level, interpolate the integer projector coordinate values by means of a 1d averaging filter (7 1 pixels) in the prominent pattern direction. in our implementation 2d orthogonal convolution windows (11 7, 15 7) are used for averaging in order to obtain smoother results and consistency among different scan-lines (fig. 6, centre and right). once pixel (or sub-pixel) matches are established, the 3d position of depicted object points is computed through simple triangulation of the corresponding optical rays. color values of these points are also directly available from the camera images; thus a 3d colored point cloud can be reconstructed (fig. 7). figure 6: reconstruction improvement by sub-pixel correspondences. figure 7: examples of colored 3d point clouds. 5. experimental results to demonstrate the effectiveness of the sl system implemented to this stage, a scanning of two small statues ( 20 cm and 12 cm in height) was performed. in each case 11 separate scans were carried out, and the reconstructed point clouds were registered with respective rms deviations of 60 m and 80 m. finally, 3d mesh models were created for each object, seen in figs. 8 and 9. ___________________________________________________________________________________________________________ geoinformatics ctu fce 265 6. future tasks in this contribution a realization of a structured light system with consumer hardware components (a video projector and a digital camera) was described. algorithmic details were discussed regarding system calibration and matching of pixels among projector and camera frames. experimental results were also presented showing the potential of such lowcost systems. innovations of the proposed approach can be found in the calibration process which is performed in one step for camera and projector, considerably simplifying the use of such a system. 
future tasks of our ongoing research include introduction of a second camera; investigation of alternative methods for obtaining sub-pixel accuracy; automatic detection of occlusions, hole-filling and combination of structured light techniques with dense-stereo matching algorithms in search for higher accuracy; automatic registration of scans acquired from different viewpoints for obtaining full 3d representation of scanned objects; and, finally, investigation of the potential of un-calibrated camera-projector systems. figure 8: different views of the first rendered 3d model (obtained from 11 scans). figure 9: different views of the second rendered 3d model (obtained from 11 scans). 7. references [1] xiao, j., fang, t., zhao, p., lhuillier, m., quan, l.: image-based street-side city modeling, acm transaction on graphics (tog), 28(2009) 5. [2] teutsch, c.: model-based analysis and evaluation of point sets from optical 3d laser scanners, phd thesis, magdeburger schriften zur visualisierung, shaker verlag, 2007. [3] claes, k., bruyninckx, h.: robot positioning using structured light patterns suitable for self calibration and 3d tracking, proceedings of the 2007 international conference on advanced robotics, jeju, korea. ___________________________________________________________________________________________________________ geoinformatics ctu fce 266 [4] cignoni, p., scopigno, r.: sampled 3d models for ch applications: a viable and enabling new medium or just a technological exercise? acm journal on computing and cultural heritage, 1(2008)1. [5] cyark project. http://archive.cyark.org/about [6] 3d-coform. http://www.3dcoform.eu/ [7] seitz, s.m., curless, b., diebel, j., scharstein, d., szeliski, r.: a comparison and evaluation of multi-view stereo reconstruction algorithms, proceedings cvpr, 1(2006), 519-528. [8] strecha, c., fransens, r., van gool, l.: combined depth and outlier estima tion in multi-view stereo, proceedings cvpr, 2(2006), 23942401. [9] furukawa, y., ponce, j.: accurate, dense, and robust multi-view stereopsis, proc. cvpr, 2007, 1-8. [10] vu, h.h., keriven, r., labatut, p., pons, j.-p.: towards high-resolution large-scale multi-view stereo, proc. cvpr, 2009. [11] mercier, b., meneveaux, d.: shape from silhouette: image pixels for marching cubes, journal of wscg, 13(2005), 112-118. [12] prados, e., faugeras, o.: perspective shape from shading and viscosity solutions, proc. 9th ieee iccv, vol. ii, nice, france, october 2003, 826-831. [13] tankus, a., sochen, n., yeshurun, y.: a new perspective on shape from-shading, proc. 9th ieee iccv, vol. ii, nice, france, october 2003, 862-869. [14] bouguet, j.-y., perona, p.: 3d photography on your desk, proc. iccv 1998, 43-50. [15] winkelbach, s., molkenstruck, s., wahl, f.m.: low-cost laser range scanner and fast surface registration approach, proc. dagm ‟06, lecture notes in computer science, vol. 4174, springer, 2006, 71ř-728. [16] prokos, a., karras, g., petsa, e.: automatic 3d surface reconstruction by combining stereovision with the slitscanner approach, international archives of photogrammetry, remote sensing and spatial information sciences, vol. xxxviii, part 5, 2010, 505-509. [17] gühring, j.: dense 3-d surface acquisition by structured light using off-the-shelf components, videometrics and optical methods for 3d shape measurement, 2001, 220-231. [18] rocchini, c., cignoni, p., montani, c., pingi, p., scopigno, r.: a low cost 3d scanner based on structured light, eurographics 2001, 20 (2001)3. 
[19] tchou, c.: image-based models: geometry and reflectance acquisition systems, uc berkeley, m.sc. thesis, 2002. [20] scharstein, d., szeliski, r.: high-accuracy stereo depth maps using structured light, proc. ieee cvpr, 1(2003), 195-202. [21] fechteler, p., eisert, p., rurainsky, j.: fast and high resolution 3d face scanning, proc. icip, 2007. [22] peng, t., gupta, s.k.: model and algorithms for point cloud construction using digital projection patterns, journal of computing and information science in engineering, 7(2007)4, 372-381. [23] koninckx, t., griesser, a., van gool, l.: real-time range scanning of deformable surfaces by adaptively coded structured light, in: 3-d digital imaging & modelling, banff canada, 2003, 293-300. [24] peisen, s., zhang, h.s.: fast three-step phase-shifting algorithm, applied optics, 45(2006)21, 5086-5091. [25] chen, s.y., li, y.f., zhang, j.: vision processing for realtime 3-d data acquisition based on coded structured light, ieee transactions on image processing, 17(2008)2, 167-176. [26] liu, k., wang, y., lau, d.l., hao, q., hassebrook, l.g.: dual-frequency pattern scheme for high-speed 3-d shape measurement, optical express, 18(2010), 5229-5244. [27] zhang, s.: recent progresses on real-time 3-d shape measurement using digital fringe projection techniques, optics and lasers in engineering, 40(2010), 149-158. [28] schmalz, c., angelopoulou, e.: a graph-based approach for robust single-shot structured light, ieee international workshop on projector-camera systems, 2010. [29] furukawa, r., kawasaki, h.: uncalibrated multiple image stereo system with arbitrarily movable camera and projector for wide range scanning, proc. 5th 3dim ‟05, 2005, 302-309. [30] aliaga, d., xu, y.: photogeometric structured light: a self-calibrating and multi-viewpoint framework for accurate 3d modeling, proc. ieee cvpr, 2008. [31] gao, w., wang, l., hu, z.: flexible calibration of a portable structured light system through surface plane, acta automatica sinica, 34(2008)11, 1358-1362. [32] zhang, s., huang, p.s.: novel method for structured light system calibration, optical engineering 45(2006)8. [33] knyaz, k.l.: multi-media projector – single camera photogrammetric system for fast 3d reconstruction, international archives of photogrammetry, remote sensing and spatial information sciences, vol. xxxviii, part 5(2010), 343-348. [34] douskos, v., grammatikopoulos, l., kalisperakis, i., karras, g., petsa, e.: fauccal: an open source toolbox for fully automatic camera calibration. xxii cipa symposium on digital documentation, interpretation & presentation of cultural heritage, kyoto, japan, october 2009. [35] salvi, j., fernandez, s., pribanic, t., llado, x.: a state of the art in structured light patterns for surface profilometry, pattern recognition, 43(2010)8, 2666-2680. http://archive.cyark.org/about http://www.3dcoform.eu/ http://www.ncbi.nlm.nih.gov/pubmed?term=%22chen%20sy%22%5bauthor%5d http://www.ncbi.nlm.nih.gov/pubmed?term=%22li%20yf%22%5bauthor%5d javascript:al_get(this,%20'jour',%20'ieee%20trans%20image%20process.'); projekt openstreetmap z pohledu geoinformatika daniel bárta institute of geoinformatics, vsb-tu of ostrava daniel.barta.st2@vsb.cz keywords: openstreetmap, open geodata kĺıčová slova: openstreetmap, otevřená geodata abstract this thesis discusses conditions suitable for creation of open-licensed geographic data, distinguishes different levels of opennes. 
it focuses the openstreetmap community project, which has the aim to create and provide free geographic data. this paper gives a brief insight to the project, presents its key features and its history. abstrakt práce pojednává o podmı́nkách vhodných pro vytvářeńı geodat se svobodnou licenćı, rozlǐsuje r̊uznou úroveň jejich otevřenosti. dále se zaměřuje na komunitńı projekt openstreetmap, který vytvář́ı a udržuje svobodná geografická data. poskytuje prvotńı náhled na projekt, seznamuje s jeho kĺıčovými vlastnostmi a vývojem. od open source k open geodata koncem 80. let 20. stolet́ı začala vznikat, snad nejprve mezi programátory, potřeba vytvářet svobodné/otevřené programové vybaveńı. snahy jednotlivc̊u o vytvořeńı vhodných licenćı pro publikováńı programů, propagace a př́ıpadně hájeńı práv autor̊u a uživatel̊u byly později spojeny pod hlavičkou nadace free software foundation gnu, nebo neziskové organizace open source initiative. s odstupem času můžeme ř́ıci, že mnohé projekty vzešlé z této myšlenky geinformatics fce ctu 2008 91 projekt openstreetmap z pohledu geoinformatika hraj́ı významnou roli v mnoha odvětv́ıch informačńıch technologíı – nápad několika nadšenc̊u se změnil ve fenomén. pro př́ıklad uved’me jádro operačńıho systému gnu/linux, který je š́ı̌ren pod často už́ıvanou licenćı gnu gpl, která je dnes ve třet́ı verzi. kĺıčovým prvkem všeobecného rozš́ı̌reńı otevřeného softwaru byl přesun hardwarového vybaveńı ze sál̊u výpočetńıch středisek na každý pracovńı st̊ul v zaměstnáńı či domovech. obdobným procesem prošel i hardware geoinformatiky a př́ıbuzných obor̊u. v devadesátých letech 20. stolet́ı byl uveden do provozu a zpř́ıstupněn veřejnosti projekt americké armády navstar gps [1]. přij́ımače družicového signálu se z ponorek a amerických letadlových lod́ı postupně dostávaj́ı do každého motorového vozidla, do rukou turisty. prvotńı potřeba běžných uživatel̊u byla zjǐst’ováńı polohy a navigace, později přibyla i zábava jako např́ıklad geocaching. neńı tedy žádný d̊uvod proč by obdobný proces jako bylo osvobozeńı programového kódu nemohl zač́ıt v oblasti geoinformatiky a také, což je i tématem této práce, osvobozeńı geodat. otevřenost geodat free software foundation popisuje možnost nahĺıžet na poč́ıtačové programy skrze mı́ru svobody, s jakou lze s nimi pracovat.[2] analogíı tohoto př́ıstupu, použitou na geodata, pak můžeme uvažovat: i. svoboda možnost zobrazit data (metadata), za jakýmkoliv účelem. těchto možnost́ı je dnes mnoho, jak prostřednictv́ım produkt̊u komerčńıch subjekt̊u, tak státńıch organizaćı. pro zobrazováńı dat využ́ıvaj́ı bud’ účelově sestavený nebo standardizovaný mapserver. využit́ı dat je d́ıky licenci možné pouze pro zobrazeńı a osobńı potřebu, informace o metadatech jsou k dispozici jen z mlhavých dedukćı uživatel̊u. nejjednodušš́ı zp̊usob provedeńı rozhrańı mapserveru jsou v běžném internetovém prohĺıžeči zobrazitelné webové stránky na technologíıch html, javascript, ajax. jsou př́ıstupné zpravidla veřejně a bez registrace, bývaj́ı přizp̊usobené pro uživatele avšak nemaj́ı rozhrańı vhodné a standardizované pro strojové zpracováńı. české komerčńı mapové servery obsahuj́ı obvykle družicové nebo letecké sńımky, automapu, uličńı mapy měst, turistické mapy nebo trasy, př́ıpadně staré mapy z 19. stolet́ı. jsou začasto omezené územı́m česka, př́ıpadně nejbližš́ıch soused̊u. 
př́ıkladem může být: � http://amapy.atlas.cz � http://mapy.seznam.cz � http://supermapy.centrum.cz zahraničńı mapservery obsahuj́ıćı relevantńı data k územı́ české republiky jsou typické s nižš́ı kvalitou a stář́ım geodat, nebot’ jejich p̊uvodci jsou ciźı organizace, maj́ıćı těžǐstě zájmu mimo čr. poskytovány jsou zejména družicové nebo letecké sńımky, automapy a uličńı mapy měst. např́ıklad: � http://maps.google.com geinformatics fce ctu 2008 92 http://amapy.atlas.cz http://mapy.seznam.cz http://supermapy.centrum.cz http://maps.google.com projekt openstreetmap z pohledu geoinformatika � http://maps.yahoo.com � http://maps.live.com výjimečně se na českém internetu objevuj́ı netypické služby, zpř́ıstupňuj́ıćı d́ılč́ı části státńıho mapového d́ıla jako např́ıklad: � vizualizace uir-adr na rzm10 od mpsv1 pokročileǰśı zp̊usob výměny vizualizovaných geodat poskytuje služba standardu wms provozovaná obvykle spolu s mapserverem, kterou lze snadno dále využ́ıvat v programovém vybaveńı nebo automatizovaně zpracovávat. např́ıklad[14]: � wms cenia2 (neposkytuje korektńı výstup pro epsg:4326) � wms oblastńı plán rozvoje lesa úhul3 � wms katastrálńı mapa čúzk4 ii. svoboda možnost studovat data a metadata a adaptovat je ke svým potřebám. předpokladem je př́ıstup k zdrojovým dat̊um. zde už je možnost́ı výrazně méně. můžeme sáhnout po ucelených komerčńıch sadách subjekt̊u (čúzk viz tabulka, arcdata, t-mapy, ...). u těchto dataset̊u je však licence obvykle limitována – tedy k dispozici je sice forma zdrojových dat, ale zp̊usob využij́ı je podstatně omezen. název baĺıku dat cena za územı́ čr zabaged polohopis 3.700.000 kč zabaged výškopis 1.000.000 kč ortofotomapa čr (0,5m/px) 2.400.000 kč ukázka ceny dat, ceńıku čúzk platný od 1. 1. 2007, převzato z [3] pro některá data rastrového datového modelu (např. letecké sńımkováńı ve viditelném spektru) lze poskytnout zdrojová data skrze wms službu. vhodný zp̊usob poskytováńı zdrojových dat vektorového datového modelu je wfs služba. jedny z mála wms/wfs služeb provozuje úhul: � wfs úhul – lesńı pokryv čr5 (aktuálně nedostupné) � wms úhul – panchromatické letecké sńımky čr, zdroj dat čúzk6 iii. svoboda 1 http://mapy.mpsv.cz:8080/mapy2/mpsv2.html 2 http://geoportal.cenia.cz/ 3 http://geoportal2.uhul.cz/cgi-bin/oprl.asp?service=wms 4 http://wms.cuzk.cz/wms.asp 5 http://212.158.143.149/cgi-bin/wfs?service=wfs 6 http://geoportal2.uhul.cz/cgi-bin/oprl.asp?service=wms geinformatics fce ctu 2008 93 http://maps.yahoo.com http://maps.live.com http://mapy.mpsv.cz:8080/mapy2/mpsv2.html http://geoportal.cenia.cz/ http://geoportal2.uhul.cz/cgi-bin/oprl.asp?service=wms http://wms.cuzk.cz/wms.asp http://212.158.143.149/cgi-bin/wfs?service=wfs http://geoportal2.uhul.cz/cgi-bin/oprl.asp?service=wms projekt openstreetmap z pohledu geoinformatika možnost vytvářet kopie a volně je distribuovat. pro typický př́ıklad se muśıme poohlédnou do usa, kde je na data vytvořená státńımi organizacemi uplatňována nejčastěji licence public domain, tedy poskytováńı dat zdarma avšak bez záruky: � vektorová data: nima (vmap0, vmap1), us census (tiger) � rastrová data: nasa (dem, landsat 7, srtm) maj́ı celosvětové pokryt́ı v měř́ıtkách do 1:1 000 000 nebo podrobněǰśı pro vybraná územı́ zájmu usa (usa, mexiko, část bývalého sssr). v české republice lze taktéž uvažovat o volně dostupných datových sadách s možnost́ı redistribuce, nicméně u nich neexistuje formálně definovaná licence, byt’ např́ıklad gestor mpsv, nebo řsd volné nakládáńı s daty neformálně předpokládá nebo připoušt́ı, naopak např. 
heis vúv se stav́ı proti. obecně je postoj organizaćı a jednotlivc̊u k poskytováńı vlastńıch dat třet́ım stranám ve znameńı neochoty a nejistoty v definováńı vlastńı licence. v př́ıpadě souhlasu se jedná právně neformulovaný ústńı nebo do e-mailu verbalizovaný souhlas. a to i v př́ıpadě, kdy vznikaj́ı z veřejných prostředk̊u a jsou ve zdrojovém formátu veřejně dostupné nebo výsledek volnočasové aktivity jedinc̊u.[14][15] na českém územı́ se jedná např́ıklad o datasety: � registry: – uir-adr7 gestora mpsv – uir-zsj8 gestora čsú � vektorová data: – generalizovaná komunikačńı śıt’9 silničńı databanky ostrava správce řsd – vodńı toky10 povod́ı labe. možnost data upravovat, odvozovat jiná a tyto změny veřejně sd́ılet. předpokladem je př́ıstup k zdrojovým dat̊um. existuj́ı licence, které definici splňuj́ı nebo vynucuj́ı, avšak datové sady š́ı̌rené pod touto licenćı v česku nejsou známy vyjma openstreetmap. předpoklady pro vznik open-geodata projektu vznik projektu zaměřeńı na vytvářeńı p̊uvodńıch open-geodat (př́ıpadně openstreetmap a obdobných) je obvykle motivován: � absentuj́ıćımi geodaty, př́ıpadně existuj́ıćı geodata nejsou dostupná veřejně a za dostatečně volných podmı́nek. � lidskou potřebou tvořit a vytvářet hodnoty i mimo činnost finančně honorovanou. 7 http://forms.mpsv.cz/uir/ 8 http://www.czso.cz/csu/rso.nsf/i/prohlizec uir zsj 9 http://www.rsd.cz/rsd/rsd.nsf/0/dffc2ff000fc1fb3c1256dbf002ccee3 10 http://www.pla.cz/planet/ram.aspx?id=21 geinformatics fce ctu 2008 94 http://forms.mpsv.cz/uir/ http://www.czso.cz/csu/rso.nsf/i/prohlizec_uir_zsj http://www.rsd.cz/rsd/rsd.nsf/0/dffc2ff000fc1fb3c1256dbf002ccee3 http://www.pla.cz/planet/ram.aspx?id=21 projekt openstreetmap z pohledu geoinformatika � potřebou sd́ılet své znalosti a výsledky bez restrikćı a poskytovat je komunitě. a předpokládá: � svobodu pohybu � volný čas (po práci, po škole) � levný a dostupný hardware � př́ıstup ke službám (gps, internet) za těchto okolnost́ı může vzniknout komunitńı projekt. openstreetmap (osm) neńı samozřejmě prvńı projekt zaměřený na vytvářeńı/soustředěńı geodat. nejčastěji ho předcházely mapy vytvářené uživateli přij́ımač̊u/navigátor̊u gps garmin. později v západńı evropě vznikaj́ı lokálńı mapy na podobném principu jako osm, účelové mapy např. pro projekt wikipedia, speciálńı nebo lokálńı mapy, nebo vytvořeńım jednotného baĺıku dataset̊u třet́ıch stran freegeodatacz11. osm je ale výjimečný svou životaschopnost́ı, přizp̊usobivost́ı a lidským potenciálem. zabývá se sběrem dat komplexně, nezávisle na ćılovém mapovém výstupu a upotřebeńı, přesto však buduje rozhrańı pro snadný import a export na stávaj́ıćı ćılová zař́ızeńı (proprietárńı gps moduly, gis programy). jasně a zřetelně se hláśı k svobodným licenćım a využ́ıvá jiné legálńı zdroje dat. části datového modelu jsou otevřené uživatel̊um, kteř́ı jej upravuj́ı dle jejich potřeb a možnost́ı. projekt neńı určen jen pro vybraný region, národnost; vytvářet data lze pro celém světě a v libovolném jazyce. ćılem projektu je vytvářet otevřená polohopisná geografická data s širokým okruhem obslužných aplikaćı na principech komunitńı otevřené a sd́ılené práce. figure 1: logo projektu openstreetmap historie openstreetmap projekt osm vzniká v červenci roku 2004 v anglii, kde je registrována doména osm12, stoj́ı za ńım stephen coast, richard fairhurst. výrazné osoby se přidávaj́ı z německa immanuel scholz, frederik ramm, jochen topf a daľśı... 
� v začátkem roku 2006 zač́ınaj́ı vznikat národńı sekce, obvykle na na úrovni stát̊u, které spolupracuj́ı při tvorbě dat v daném regionu. � v dubnu 2006 vzniká nadace openstreetmap, která má za úkol shromažd’ovat finančńı prostředky na podporu projektu osm. � v ř́ıjnu 2006 se přidávaj́ı prvńı uživatelé z česka a vznikaj́ı zde prvńı data. 11 http://grass.fsv.cvut.cz/wiki/index.php/freegeodatacz 12 http://www.openstreetmap.org/ geinformatics fce ctu 2008 95 http://grass.fsv.cvut.cz/wiki/index.php/freegeodatacz http://www.openstreetmap.org/ projekt openstreetmap z pohledu geoinformatika � v prosinci 2006 je pro osm významné uvolněńı družicových sńımk̊u ikonos prostřednictv́ım serveru maps.yahoo.com13 pro legálńı tvorbu dat. � v listopadu 2007 je v osm čr plně dostupná silničńı śıt’ i. a ii. tř́ıd a dálnic licence v rámci projektu osm je zvykem využ́ıvat licence gnu gpl pro podp̊urný software. často se jedná o java, perl, c, python, ruby aplikace využ́ıvaj́ıćı jiné knihovny svobodného softwaru. tato licence je i v česku podle rozbor̊u některých právńık̊u pod právńı ochranou [4],[5],[6]. pro geodata je už́ıvána licence creative common attribution-sharealike 2.014 (zkráceně cc by-sa 2.0), někteř́ı uživatelé je nav́ıc poskytuj́ı pod licenćı public domain. licence cc bysa 2.0 umožňuje data volně koṕırovat, měnit i prodávat za předpokladu, že jejich libovolná modifikace nebo interpretace bude opět dostupná pod touto licenćı. ve francii dř́ıve formulovaná licence public geodata license15 (český překlad16 pgl), nebyla nakonec komunitou použita. referenčńı rámec a model geodat polohopisná složka projekt osm se zabývá sběrem polohopisných dat, pro něž je využ́ıváno geodetické datum wgs-84, jak je definováno v epsg:4326. výškopis výškopis neńı předmětem sběru dat. pro účely překrývaj́ıćıch se objekt̊u (nejčastěji mosty, tunely, plochy zeleně a vody) lze využ́ıt tématický kĺıč, kterým lze definovat pořad́ı zobrazeńı jednotlivých prvk̊u. uvažuje-li se o využit́ı výškopisných dat jako doplňuj́ıćı informaci k polohopisu v podobě reliéfu nebo vrstevnic, pak jako zdroj je nejčastěji už́ıván srtm3, nebo gtopo30. tématická složka tématická složka je robustńı a nejv́ıce dynamickou složkou komunitńıho wiki [7]. uživatelé navrhuj́ı a schvaluj́ı rozličné vlastnosti, které maj́ı potřebu mapovat, nebo je považuj́ı za d̊uležité. 
v současné době obsahuj́ı sady značek (tag̊u) pro fyzické objekty[17]: � dopravńı komunikace a zař́ızeńı (silničńı, železničńı, vodńı a letecká doprava) 13 http://maps.yahoo.com 14 http://creativecommons.org/licenses/by-sa/2.0/ 15 http://cemml.carleton.ca:8080/ogug/members/drsampson/pgl/public-geodata-license 16 http://gis.templ.net/pgl/index.html geinformatics fce ctu 2008 96 http://maps.yahoo.com http://creativecommons.org/licenses/by-sa/2.0/ http://cemml.carleton.ca:8080/ogug/members/drsampson/pgl/public-geodata-license http://gis.templ.net/pgl/index.html projekt openstreetmap z pohledu geoinformatika � občanské, pr̊umyslové a vojenské objekty a areály � využit́ı kulturńı, urbanistické krajiny nebo krajinný pokryv, vodstvo � občanská vybavenost � turistické a historické objekty a abstraktńı, rozšǐruj́ıćı, doplňuj́ıćı nebo omezuj́ıćı sady značek (tag̊u): � trasy (hromadná doprava, cyklokoridory) � administrativńı hranice � volnočasové aktivity � okoĺı objekt̊u � př́ıslušenstv́ı a obecné vlastnosti � omezeńı (předevš́ım dopravńı) � názvy � mı́stopis � poznámkový aparát datová primitiva centrálńı databáze [8] shromažd’uje uživateli vytvářená geodata, která jsou tvořena dvěma základńımi prvky, které nesou unikátńı index, časové raźıtko, autora a informaci o své existenci (platnosti). jsou to: � nodes (uzly) – jako jediné nesou samy o sobě př́ımou polohovou informaci. � ways (cesty) – jsou uspořádané orientované posloupnosti nod̊u, kde se každý uzel vyskytuje nejvýše jednou. � areas (plochy) – v př́ıpadě že cesta je uzavřená (prvńı a posledńı uzel je totožný), považuje se za plochu. rozšǐruj́ıćı prvky � tags (značky) – je výčet možných proměnných a jejich hodnot pro popisnou složku geodat � relations (vztahy) – vztahy je náznak budoucnosti v rozš́ı̌rených možnostech seskupováńı a určováńı roĺı primitiv pro zjednodušeńı správy editace a udržováńı objekt̊u. vývoj struktury datových primitiv je ve zkratce následuj́ıćı: [9] 1. nodes, segments (orientované hrany) + tags 2. nodes, segments (orientované hrany), ways(posloupnost hran) + tags 3. současný stav: nodes, ways + tags, relations geinformatics fce ctu 2008 97 projekt openstreetmap z pohledu geoinformatika 4. budoucnost?: nodes, ways + tags, s plným uplatněńım relations, historie změn a metaeditačńı data [12][13] jejich schématické zobrazeńı je na obrázku [figure 2], strukturu zápisu do souboru na schématu [figure 3]. figure 2: primitiva modelu osm: node, way, area figure 3. vzorový xml zápis osm modelu centrálńı databáze osm skrze api poskytuje uživatel̊um posledńı aktuálńı data z požadované geografické oblasti a jejich opravy přij́ımá pouze inkrementálně. veškerá historie z̊ustává tedy archivována, jej́ı využit́ı neńı zat́ım do žádného uživatelského editoru plně implementováno, částečnou lze naj́ıt v online editoru potlatch. jako demonstraci možnost́ı historie je webová aplikace osm history17 vytvářej́ıćı animovaný rastrový obrázek s r̊ustem dat vybrané oblasti v čase. záznamy z gps přij́ımač̊u databáze má také vyhrazenou část pro sběr samotných záznamů z gps přij́ımač̊u (tracklog) ve formátu gpx. zdrojová data tak nez̊ustávaj́ı skryta u p̊uvodńıch uživatel̊u, ale mohou být použita jako podklad pro nová geodata odvozená jiným zp̊usobem, nebo v jiném čase. zdroje dat zdrojem dat pro projekt osm jsou předevš́ım individuálńı záznamy (tracklogy) uživatel̊u z přij́ımač̊u gps. 
jejich postupný r̊ust doplňuje několik licenčně kompatibilńıch dataset̊u s rozsáhlým pokryt́ım: 17 http://openstreetmap.gryph.de/history/ geinformatics fce ctu 2008 98 http://openstreetmap.gryph.de/history/ projekt openstreetmap z pohledu geoinformatika 1. vektorová mapa vmap0 (autor nima) – celý svět 1:1 000 000 2. družicové sńımky landsat 7 poř́ızené v roce 1999-2001 (autor nasa) – rozlǐseńı 30m 3. družicové sńımky hlavńıch měst stát̊u (poskytovatel yahoo) – v česku pouze praha a okoĺı (rozlǐseńı ∼2m, sńımky družice ikonos z roku 2002) 4. letecké sńımky územı́ čr z let 1998-2001 jejichž p̊uvodcem je čúzk, poskytovatel skrze wms a licence pro osm je úhul. 5. mapy bez autorských práv – volná licence 6. mapy, kde vypršela autorská práva – v česku 70 let od smrti (posledńıho) autora lokálńı datasety jako např. tiger v usa nebo and v holandsku nejsou ve výčtu uvedeny a staraj́ı se o ně obvykle národńı mapovaćı skupiny osm. součásti projektu projekt osm se skládá z několika fyzicky nebo logicky d́ılč́ıch část́ı [10]: � www (amsterdam, nl) – mapserver, který zpř́ıstupňuje databanku rastrových výřez̊u � tile (londýn, uk) – databanka výřez̊u map v rastrovém formátu � tilegen – rendrovaćı server, který z planet.osm vytvář́ı rastrové výřezy map � planet (londýn, uk) – týdenńı export aktuálńı verze geodat z databáze do jednoho xml souboru, jeho velikost je po kompresi bz2 ve stovkách mb (300 mb v červenci 2007) � api (londýn, uk) – api k databázi geodat � db (londýn, uk) – databáze geodat, provozovaná v mysql, která poskytuje data k editaci a přij́ımá modifikovaná nebo nová data, udržuje historii dat � wiki (york, uk) – wiki rozhrańı pro dlouhodobou výměnu informaćı uvnitř projektu, spravovaná všemi uživateli � svn (york, uk) – subversion rozhrańı pro vývoj aplikaćı a skript̊u � dev (amsterdam, nl) – testovaćı rozhrańı vývojář̊u, některý vývoj a testováńı prob́ıhá na soukromých stroj́ıch, jako např. editor josm v německu. � mail (york, uk) – rozhrańı pro e-mailové konference talk, talk-dev, talk-* � blog (york, uk) – blog stručných zpráv z konferenćı a událost́ı okolo osm software api geinformatics fce ctu 2008 99 projekt openstreetmap z pohledu geoinformatika figure 4: diagram komponent osm. převzato z [10]. api [11] je kĺıčovou část́ı osm nebot’ propojuje vněǰśı svět s databáźı geodat. maximálně využ́ıvá existuj́ıćıch standard̊u a jen to nezbytné přidává. základem je śıt’ová vrstva ip, transportńı vrstva tcp a aplikačńı vrstva http. posledńı a jediná podporovaná verze api je 0.5. základńı požadavek klienta je pro http specifikován: "http:" "//" host [ ":" port ] [ abs_path ["?" query ]] dotaz na jeden konkrétńı prvek node, např.: http://api.openstreetmap.org/api/0.5/node/35 uživatelské editory dat jedná se o programy, kterými uživatelé přistupuj́ı k datovému skladu ze svých domáćıch poč́ıtač̊u a s nimiž upravuj́ı geodata osm. úpravy je možno provádět jen z dat umı́stěných v centrálńımu datovému skladu a to při připojeńı: 1. dočasném (např. josm) – uživatel si nejprve stáhne soubor dat, provede úpravy, zkontroluje konflikty a odešle data zpět do datového skladu. 2. stálém (např. potlach) – uživatel si na mapserveru nalezne oblast k editaci, na požadavek je mu umožněn př́ıstup k vektorové podobě a provedené změny lze pr̊uběžně odeśılat, př́ıpadně vracet (i za hranici editaćı aktuálńıho uživatele). mezi editory patř́ı: geinformatics fce ctu 2008 100 projekt openstreetmap z pohledu geoinformatika � josm (viz figure 5) – ”java osm” je plně funkčńı a použitelný editor osm dat. 
původńım autorem je immanuel scholz. program vlastńı nástroje na vytvářeńı, editaci a modifikaci dat, jejich značkováńı. umı́ řešit editačńı konflikty aktuálńıch editaćı a zobrazuje autory jednotlivých prvk̊u. nyńı je dostupný zkompilovaný ve stabilńı verzi 1.5 a vývojové verzi. umožňuje vytvořená data ukládat na disk, podkládat záznamy cest z gps přij́ımač̊u (tracklogy) ve formátu gpx. je rozšǐritelný pomoćı plugin̊u, mezi nejzaj́ımavěǰśı patř́ı pokročilý wms klient (jehož implementace je umožňuje velmi efektivńı práci s wms v produktech gis jako např. arcgis neznámou), mappaint pro vylepšené zobrazováńı editovaných dat, validator korektńıho značkováńı). � potlatch – flash internetová aplikace pro on-line editaci dat, jej́ıž autorem je richard fairhurst. aplikace je vyv́ıjena předevš́ım pro licenčńı kompatibilitu s yahoo maps použ́ıvaných jako podkladńı vrstva pod vynášená geodata. vyv́ıjena od ledna 2007. � a jiné jako osmeditor, merkaator, osmpedit, java on-line applet – jejich vývoj byl z r̊uzných d̊uvod̊u ukončen nebo jejich vývojáři nedrž́ı bezprostředńı krok s vývojem projektu osm a často jejich posledńı vydáńı neńı kompatibilńı s aktuálńım api. figure 5: java editor josm 1.5 (wms a mappaint plugin) s daty z brna 9. 6. 2007, podloženým sńımky z landsatu. provozováno na gnu/linux ubuntu 7.04 a sun java 1.6. renderery programy, které transformuj́ı data ze souboru xml formátu osm na vektorové obrázky xml formátu svg nebo rastrové obrázky png. geinformatics fce ctu 2008 101 projekt openstreetmap z pohledu geoinformatika � mapnik (viz figure 6) – program napsaný v c++, rozhrańı v pythonu a propojený s jinými knihovnami, určený předevš́ım pro běh na serveru. předpokládá import planet.osm do postgresql databáze. po definováńı výřezu v zeměpisné š́ı̌rce a délce vytvoř́ı databanku obrázk̊u použitelných předevš́ım pro mapserver. výsledek aktualizovaný přibližně jednou týdně je dostupný jako implicitńı zdroj dat na oficiálńım mapserveru. � osmarender (viz figure 7) – individuálńı renderer aktuálńı verze 6. využ́ıvá transformačńıch styl̊u xsl a skrze xml parser vytvář́ı vektorové obrázky map ve formátu svg. je určen pro koncové uživatele (dostupný i jako plugin pro josm). � tiles@home – rozš́ı̌rená a upravená verze osmarenderu o schopnost distribuovatelných výpočt̊u podle vzoru seti@home. uživatel si bud’ vybere oblast, kterou chce udržovat aktuálńı, nebo převezme od serveru požadavek, který je na základě žádosti uživatel̊u nebo změny dat v databázi. klient si stáhne aktuálńı data, vytvoř́ı výstup obrázk̊u pro databanku a zašle jej zpět. výsledek, pr̊uběžně aktualizovaný, je dostupný jako volitelný zdroj dat na oficiálńım mapserveru. figure 6: ukázka zobrazených dat ve webovém prohĺıžeči. dálnice a rychlostńı silnice česka a jeho sousedu z renderu mapnik dostupného na mapserveru www.openstreetmap.org ze dne 2. 4. 2007. fenomén osm openaerialmap postupně jak se projekt osm rozšǐruje mezi uživatele vznikaj́ı sesterské projekty, které př́ımo s osm nesouviśı, ale poskytuj́ı mu podporu, nebo rozšǐruj́ı jeho možnosti. jedńım z takových geinformatics fce ctu 2008 102 projekt openstreetmap z pohledu geoinformatika figure 7: ukázka zobrazených dat ve webovém prohĺıžeči. oblast centra města brna (pouze nekompletńı silničńı śıt’) z renderu osmarender verze 4 dostupného na mapserveru www.openstreetmap.org ze dne 2. 4. 2007. projekt̊u je openaerialmap www.openaerialmap.org, který si klade za ćıl agregovat známé sńımky dpz ve viditelném spektru pod volnou licenćı. 
základem je sńımek z landsat 7, který je v malých měř́ıtkách překryt podrobněǰśımi sńımky. server komunikuje předevš́ım wms rozhrańım a jako mapserver, který na požadavky uživatel̊u poskytuje lokálńı kopie, nebo je přepośılá na p̊uvodńı servery správc̊u dat. pokud to licence dovoluje, jsou ukládány do vyrovnávaćı paměti. daľśı možnost́ı je vložit př́ımo nasńımané a rektifikované sńımky. někteř́ı uživatelé jdou až tak daleko, že kombinaćı bezpilotńıch leteckých prostředk̊u, gps přij́ımač̊u a fotoaparát̊u, produkuj́ı svá p̊uvodńı data dpz. the state of the map mnoho uživatel̊u osm vystupuje se svými př́ıspěvky o projektu na rozličných konferenćıch. uvnitř komunity však vznikla potřeba potřeba zpětné vazby projektu a osobńıho kontaktu. proto byla 14.-15. července 2007 na univerzitě v manchesteru (uk) uspořádána konference the state of the map18 o teoretických základech, stavu a vývoji osm či sesterských nebo jiných inspirativńıch geoinformačńıch projektech. daľśı ročńık konference byl v limericku (irsko) 12.-13. července 2008. třet́ı ročńı bude 10.-12. července 2009 v holandském amsterodamu. figure 8: logo konference the state of the map 18 http://www.stateofthemap.org/ geinformatics fce ctu 2008 103 http://www.openaerialmap.org/ http://www.stateofthemap.org/ projekt openstreetmap z pohledu geoinformatika mı́stńı setkáńı v zemı́ch západńı evropy, kde se také nacháźı větš́ı počet uživatel̊u, se pořádaj́ı škoĺıćı akce pro nové uživatele, neformálńı setkáńı a mapovaćı akce. úkolem akćı je systematicky pokrýt daty dosud plně nezaznamenanou část urbanizovaného územı́, nebo domapovat odlehlé části měst. nadace osm v anglii vznikla i nadace nezávislá na projektu, která si klade za ćıl źıskávat peńıze na podporu, propagaci projektu osm. jedná se o právnický subjekt, který reprezentuj́ı osoby pod́ılej́ıćı se na vývoji projektu, kteř́ı nesou t́ıhu vývoje. finančńı prostředky jsou určeny pro vývoj, provozu a udržováńı hardware projektu. vlastnosti komunitńıho projektu komunitńı projekty maj́ı své specifické vlastnosti, které vyplývaj́ı z charakteru uživatel̊u a jejich organizace. při takových úvahách nám může pomoci př́ıklad wikipedie, která má deľśı historii a popularitu a přes jiné zaměřeńı obdobné problémy. pohled geoinformatika pro základńı hodnoceńı projekt̊u obvykle uvažujeme měř́ıtka např. finančńı a časové efektivity, nebo účelnosti. v osm neńı možno finančńıho měř́ıtka pro dobrovolnost využ́ıt, čas dosažeńı i obecného ćıle je velmi subjektivně chápán každým uživatelem. jako jeden z ćıl̊u můžeme definovat vytvářeńı polohopisných map velkých měř́ıtek s možnost́ı generalizace pro středńı a malá měř́ıtka s obsahovou náplńı automap, plán̊u měst, cyklomap. daľśı z ćıl̊u je routovaćı mapa pro navigaci. architektura systému tyto dva ćıle umožňuje a jejich naplněńı je jen otázkou počtu dobrovolńık̊u a definováńı požadované úrovně kvality a předevš́ım obsahové náležitosti. také hardwarové řešeńı je pro tiśıce dlouhodobě aktivńıch uživatel̊u udržitelné v provozu. vývoj datového modelu ukazuje jeho živelný r̊ust spolu s touhou uživatel̊u pracovat. snaha zač́ıt projekt zcela od počátku bez robustńıho a odzkoušeného datového modelu zp̊usobuje ještě nyńı komplikace. jedná se předevš́ım o konvertibilnost formátu osm do gis standardńıch formát̊u a následné možnosti využit́ı nástroj̊u geoinformačńıch technologíı (např. gdal). 
daľśı historickou t́ıž́ı datového modelu je nevhodnost snadné a dlouhodobé údržby dat, nebot’ dosavadńı implementace modelu v editorech vyžaduje př́ıstup k dat̊um na ńızké úrovni, tedy i dostatečné znalosti a zručnosti uživatel̊u. původńı jednoduchost datového modelu umožňovala snadný vývoj obslužných aplikaćı, nyńı však v přechodném stádiu od jednoduchého k pokročilé struktuře modelu je jak správa geoprvk̊u tak obslužných aplikaćı netriviálńı. z pohledu operátora gis má projekt využit́ı jako doplňkového zdroje dat, př́ıpadně základńı orientace, nejsou-li v daném okamžiku dostupná jiná data (např. ověřeńı informace o elementárńı korektnosti georeferencováńı třet́ı stranou). nyńı je v osm třeba uvažovat: geinformatics fce ctu 2008 104 projekt openstreetmap z pohledu geoinformatika � kvalita polohového měřeńı ani obsahové náplně neńı definována. � metadata o mapovaných objektech, prováděných změnách, zdroj́ı informaćı nejsou jednotná ani obecně použ́ıvaná. � konvence práce při vytvářeńı jsou definovány pouze v obecné rovině. � pokryt́ı daty, rozsah zmapovaných územı́ neńı možno specifikovat a nesnadná je i statistická konfrontace úplnosti (např. silnice v osm versus jednotná dopravńı vektorová mapa) � konvertibilnost dat je netriviálńı, komplikovaný systém roĺı neńı dostatečně triviálńı pro vytvořeńı dlouhodobého a univerzálńıho exportu do jiných formát̊u. � geodetické základy využ́ıvá parametry wgs-84, tedy po úspěšné konverzi formátu je už plná kompatibilta se standardy � znalost mı́stńıho významu obsažená v mapě může být cennou informaćı; v optimálńım př́ıpadě může být aktuálńı (změny v klasických mapách trvaj́ı dlouho a stoj́ı nové peńıze) a vyjadřuj́ıćı skutečné využit́ı (nejen prvotńı či p̊uvodńı účel) projekt projekt osm je jako organismus, neexistuje žádná finálńı nebo stabilńı verze. stále se rozšǐruje co do kvality obsahu, tak do kvantity mapovaného územı́. mnoho část́ı projektu je v základńım a neustálém vývoji, jsou sice použitelné a zprovoznitelné, ale vyžaduj́ı však značnou zručnost a zkušenosti. v souvislosti s neustálým r̊ustem a změnami neexistuj́ı často manuály skript̊u či programů. časté změny pravidel pro editaci a zadáváńı dat ponechávaj́ı mnohé návazné části projekt̊u ve zpožděńı a tak např. některé značky (tagy) neńı možno v globálńım mapserveru renderovat. velká variabilita systému je ovlivňovaná poptávkou uživatel̊u a konkrétńım zájmem mapovat. to dává za následek malou jednotnost a koncepčnost značkováńı geoprvk̊u. problémem každého zač́ınaj́ıćıho projektu je ř́ıdké pokryt́ı daty, jehož r̊ust se s časem zpomaluje, př́ıpadně se zaciluje jen na urbanizovaná nebo navštěvovaná mı́sta. každý uživatel pracuj́ıćı jen s výstupy svého gps přij́ımače je přibližně do roka informačně vytěžen, pokud se nestává osm jeho hlavńı końıček a cestováńı ćıleně vyhledává. v létě 2007 p̊usob́ı na územı́ čr asi 10 uživatel̊u/editor̊u dat, na jaře 2008 už asi 20, z čehož polovina má spojitost s prahou, daľśı jsou rozeseti po městech a městysech. pro základńı a postupné mapováńı mı́st ”hic sunt leones” by bylo zapotřeb́ı mnohem v́ıce uživatel̊u. velkou otázkou také z̊ustává aktualizovatelnost dat, či samoopravný mechanismus chyb na straně uživatel̊u. problémem jsou i změny mapovaných objekt̊u a verifikace dat bez větš́ıho počtu zodpovědných uživatel̊u, kteř́ı by měli pod svým dohledem předevš́ım data z územı́, kde se každodenně pohybuj́ı a kde jsou sami znalci mı́stńıho významu. 
uživatelé hlavńım motorem projektu je evropa a konkrétně angličané a němci, nebot’ zde má projekt největš́ı počet aktivńıch uživatel̊u a vývojář̊u, vysoké pokryt́ı územı́ daty. ti udávaj́ı základńı geinformatics fce ctu 2008 105 projekt openstreetmap z pohledu geoinformatika tón projektu a maj́ı také velkou členskou základnu. komunikace je mimo národńı celky vždy v angličtině. většina uživatel̊u pocháźı profesně mimo obory geovědńı, často se jedná o studenty se zjevným zájmem v informatice. proto se potřebuj́ı naučit elementárńı návyky ve vizuálńı interpretaci, dále syntaxi, sémantiku, systematiku a topologii. i pokud odhlédneme od r̊uzné vyzrálosti uživatel̊u a budeme předpokládat, že maj́ı znalosti stejné úrovně a aktuálńı, přesto produkuj́ı r̊uznou kvalitu dat r̊uznými metodami sběru, editace a osobńıch zvyklost́ı a každodenńı náplńı. uživatelé maj́ı také o projektu rozličné představy z jejichž premis přistupuj́ı k projektu: � až jednou charakterizuje uživatele, který vkládáńı dat vńımá jako dlouhodobý maraton � ihned je charakter uživatele, který vńımá zadáńı a využit́ı dat aktuálně v př́ıtomném čase � kvalita je vlastnost, která určuje, že uživatel vńımá vysokou hodnotu dat (přesnost, pravdivost, ověřenost), jako kĺıčové parametry � cokoliv je vlastnost, která určuje, že uživatel vkládá cokoliv a hled́ı předeš́ım na vysokou penetraci dat všichni uživatelé jsou si rovni a neexistuj́ı žádné formálńı tř́ıdy (správci), které by řešily spory, garantovaly editace a zásahy. určitá privilegia maj́ı hlavńı vývojáři, velká mı́ra demokracie je při schvalováńı nových značek. pro př́ılǐs velká b́ılá mı́sta se uživatelé prozat́ım potkávaj́ı zř́ıdka a spory jsou zat́ım jen drobné na mezinárodńı úrovni, např. řecko, bĺızkovýchodńı oblast, kde občas prosakuj́ı vleklé politické problémy. zaj́ımavým aspektem jsou záškodńıci, kteř́ı by chtěli projekt poškodit. pokud by se na jejich činnost nepřǐslo včas, bylo by (po jejich zablokováńı obt́ıžné) jejich vandalismus obnovit do p̊uvodńıho stavu, nebot’ k historii v hlavńı databázi osm lze přistupovat pouze diskrétně a od př́ıtomnosti do minulosti. nav́ıc pro práci s historíı neńı vyvinut žádný uživatelský program, nebo sada skript̊u. závěr projekt openstreetmap tu existuje několik let a žije svým vlastńım životem mimo dosavadńı struktury zaj́ımaj́ıćı se o mapováńı povrchu předevš́ım urbanizované země. prodělává možná zbytečně dětské nemoci, je na počátku, nedaleko chv́ıle, kdy mapa byla zcela prázdná. zaplněńı b́ılých mı́st je možná na prvńı, v čr nepočetnou, generaci nadšenc̊u př́ılǐs velký úkol. tedy ještě dlouho nebude jako jediný zdroj možné uvažovat o osm. nicméně openstreetmap je životaschopným zdrojem svobodných geodat. veřejnost, která si ho pomalu bere za sv̊uj, je jeho velký potenciál. je jen na geoinformatićıch, zda se budou cht́ıt do něho zapojit a promı́tnout v něm své zkušenosti tak, aby jej mohly později využ́ıvat jako relevantńı nebo paralelńı zdroj geodat. geinformatics fce ctu 2008 106 projekt openstreetmap z pohledu geoinformatika reference 1. rapant petr: družicové polohové systémy. všb-tu ostrava, 2002. 200 str. isbn 80248-0124-8. [cit. 2008-03-30] dostupný na www: online19 2. free software foundation: the free software definition online20. [cit. 2007-06-30]. 3. zeměměřický úřad (2007): výňatek z ceńıku výkon̊u a výrobk̊u zú [online]. [cit. 200706-30]. dostupný na www: online21. 4. aujezdský josef (2005): gnu gpl a použit́ı českého práva [online]. root [cit. 2007-0630]. dostupný na www: online22. 5. 
otevřel petr (2007): rozsudek ohledně gnu/gpl – přituhuje? [online]. právo v informačńıch technologíıch [cit. 2007-06-30]. dostupný na www: online23. 6. čermák jǐŕı (2001): gnu/gpl – právńı rozbor licence [online]. root [cit. 2007-06-30]. dostupný na www: online24. 7. wiki openstreetmap (2007): map features [online]. [cit. 2007-06-30]. dostupný na www: online25. 8. wiki openstreetmap (2007): database schema [online]. [cit. 2007-06-30]. dostupný na www: online26. 9. coast stephen (2007). this mapping stuff could really take off. in the state of the map 2007. manchester : [s.n.], 2007. dostupný na www: online27. 10. wiki openstreetmap (2007): platform status [online]. [cit. 2007-06-30]. dostupný na www: online28. 11. wiki openstreetmap (2007): protocol [online]. [cit. 2007-06-30]. dostupný na www: online29. 12. ramm frederik, topf jochen (2007): towards a new data model for osm [online]. [cit. 2008-03-30]. dostupný na www: online30. 13. schuyler erle (2007): in response to ”towards a new data model for osm” [online]. [cit. 2008-03-30]. dostupný na www: online31. 19 http://gis.vsb.cz/publikace/knizni publikace/dns gps/dns gps.pdf 20 http://www.gnu.org/philosophy/free-sw.html 21 http://www.cuzk.cz/generujsoubor.ashx?nazev=30-zu cenik 22 http://www.root.cz/clanky/gnu-gpl-a-pouziti-ceskeho-prava/ 23 http://www.pravoit.cz/view.php?nazevclanku=rozsudek-ohledne-gnugpl-prituhuje&cisloclan \ ku=2007050004 24 http://www.root.cz/clanky/gnugpl-pravni-rozbor-licence/ 25 http://wiki.openstreetmap.org/index.php/map features 26 http://wiki.openstreetmap.org/index.php/database schema 27 http://www.slideshare.net/chippy/this-mapping-thing-could-really-take-off/ 28 http://wiki.openstreetmap.org/index.php/platform status 29 http://wiki.openstreetmap.org/index.php/protocol 30 http://www.remote.org/frederik/tmp/towards-a-new-data-model-for-osm.pdf 31 http://freemap.in/ sderle/osm-data-model.html geinformatics fce ctu 2008 107 http://gis.vsb.cz/publikace/knizni_publikace/dns_gps/dns_gps.pdf http://www.gnu.org/philosophy/free-sw.html http://www.cuzk.cz/generujsoubor.ashx?nazev=30-zu_cenik http://www.root.cz/clanky/gnu-gpl-a-pouziti-ceskeho-prava/ http://www.pravoit.cz/view.php?nazevclanku=rozsudek-ohledne-gnugpl-prituhuje\&cisloclanku=2007050004 http://www.root.cz/clanky/gnugpl-pravni-rozbor-licence/ http://wiki.openstreetmap.org/index.php/map_features http://wiki.openstreetmap.org/index.php/database_schema http://www.slideshare.net/chippy/this-mapping-thing-could-really-take-off/ http://wiki.openstreetmap.org/index.php/platform_status http://wiki.openstreetmap.org/index.php/protocol http://www.remote.org/frederik/tmp/towards-a-new-data-model-for-osm.pdf http://freemap.in/~sderle/osm-data-model.html projekt openstreetmap z pohledu geoinformatika 14. openstreetmap, talk-cs: wikiproject czechia/free map2osm32 seznam vybraných dataset̊u pro osm-cs, [cit. 2008-06-30] 15. martin landa: odpověd’ v konferenci33 in freegeocz 27. prosinec 2006. [cit. 
2007-06-30] 32 http://wiki.openstreetmap.org/index.php/wikiproject czechia/free map2osm 33 http://mailman.fsv.cvut.cz/pipermail/freegeocz/2006-december/000118.html geinformatics fce ctu 2008 108 http://wiki.openstreetmap.org/index.php/wikiproject_czechia/free_map2osm http://mailman.fsv.cvut.cz/pipermail/freegeocz/2006-december/000118.html direct georeferencing with correction of map projection distortions for active imaging zdeněk švec department of geomatics, czech technical university in prague sveczde1@gmail.com abstract in aerial photogrammetry, the cartesian coordinate system for the description of object space is commonly used. in contrast, many projects have to be processed in specific map projection and vertical datum. in that space, some geometric deformations exist. there are some compensative methods for active and passive sensors. in the case of active sensors, decomposition and the correction of observation vector for each ground point can be used. we obtain height, horizontal distance and horizontal angle in this process. all of these values should be corrected for precise georeferencing. the contribution deals with the derivation of the corrections and gets some theoretical values from the area of the czech republic. esspetially in the case of high flying heights the corrections may gain values even in order of meters. keywords: sensor orientation, national coordinates, mapping, lidar, gnss/imu introduction the task of georeferencing in the field of aerial topographic survey is the determination of geometric relations between captured data and the real world [7]. it includes two consecutive procedures:the determination of the exterior orientation parameters (eop) and the restitution scene from eop and observed data [9]. eop comprise of the spatial position of the sensor projection center and sensor attitude at the time of the observation. eop for passive sensors can be gathered indirectly (by the measurement of image coordinates of ground control points) or directly from records of the on-the-board navigation system. on the contrary, active imaging is dependent (due to the sequential measurement principle andthe motion of the carrier vehicle) on direct methods. referring to the direct georeferencing eop is typically measured by the gnss and inertial measurement unit (imu). gnss provides absolute position with sufficient frequency (at least 1hz) and imu sensor attitude and acceleration. final trajectory is deduced by combining gnss and imu observations [1]. the aerial survey product is mostly provided in “national coordinates”. it means the coordinate system realised by a combination of the national geodetic datum with a national map projection with the associated vertical system. the model for direct georeferencing is designed for cartesian space but national coordinates do not fulfill the condition and therefore cause various geometrical distortions. there are two ways to obtain accuratedata in national coordinates (see fig. 1): restitution of the scene in cartesian (usually earth-fixed) frame and geoinformatics fce ctu 15(1), 2016, doi:10.14311/gi.15.1.3 35 http://orcid.org/0000-0002-6414-6696 http://dx.doi.org/10.14311/gi.15.1.3 http://creativecommons.org/licenses/by/4.0/ z. švec: direct georeferencing with correction of map projection transforming the scene to national coordinates or restitution of the scene in national coordinates with the corrected observation vector. further text refers to active imaging, especially to georeferencing of lidar data. 
figure 1: accurate georeferencing in national coordinates for lidar. direct georeferencing for active imaging reference frames and eop transformation the eop should be transformed into desired reference frame. overview of reference frames purveys tab 1. detailed description of transformations can be found in [9] and [6]. brief summary contain fig. 2. table 1: overview of the reference frames frame description e earth-centered earth fixed frame realized by international terrestrial frame n eccentric earth fixed frame of national ellipsoid l tangent frame of national ellipsoid p projection frame established by national map projection b body frame realised by acceleromerers of imu model for georeferencing of lidar data according to [10], the coordinates of ground point xg can be (in cartesian frame) expressed as xg = xeo + reorscanxrange = xeo + xdg, (1) where xeo and reo are sensor position vector and the rotation matrices formed by angular eop, respectively, rscan and xrange are scan angle matrix and range vector, respectively. the geoinformatics fce ctu 15(1), 2016 36 z. švec: direct georeferencing with correction of map projection figure 2: schema of eop transformation second term forms the observation vector for direct georeferencing xdg. if the georeferencing is carried out in national coordinates (p-frame), xeo can be given by exact formulas (usually provided by state mapping authority), contrariwise xdg is skewed due to datum scale change and p-frame geometry. we may assume that the correct position of ground point in p-frame x ptrans g is given by georeferencing in e-frame and subsequently rigorous transformation to p-frame. then we gain the correct observation vector in p-frame xṕdg as x ṕ dg = x ptrans g −x p eo. (2) our task is to modify vector xpdg to x ṕ dg. it will be to apply processes published in [10] as a practical approach. it involves some simplifications and refers to conformal map projection. correction of p-frame distortions according to [10], the method consists of four subsequent steps (fig. 3). datum scale distortion. the cartesian e-frame and n-frame have a different scale (if they have not used the same datum). hence the length of the observation vector is different as well and it should be multiplied by scale factor mdatum. decomposition of xpdg to height component zdg (it is always negative), horizontal distance d and horizontal angle ϕ (fig. 4). it will be called “spatial observations”. application of map projection corrections to spatial observations. the earth curvature correction hec is added to zdg, d is transformed to the geodesic distance s and to the projected length d´, skew-to-normal correction ζ and arc-to-chord correction δ geoinformatics fce ctu 15(1), 2016 37 z. švec: direct georeferencing with correction of map projection is added to horizontal angle (normal-section-to-geodesic correction ξ is not taken into account, because it is numerically insignificant) composition of map-projected observations to xṕdg figure 3: sequence of correction of observation vector in p-frame imu works based on newtonian laws and its z-axis is aligned with plumb line. if the vertical datum is not gravity-related, the gravimetric correction is required to be added in observations [5]. decomposition of xpdg is executed by simple equations zdg = zdg,d = √ x2dg + y 2 dg,ϕ = tg xdg ydg · (3) the earth curvature correction height and length component are deduce from geometry in fig. 5 and fig. 6. 
hec can be expressed as hec = cos α |gp| = cos α(|of| 1 cos α −|og|), (4) g and f can be approximately regarded as locating at a same arc with the radius r + hg, then hec = cos α[(r + hg) 1 cos α − (r + hg)], (5) due to the small value of α we can simplify cos α≈1 − 12α 2 hec≈ α2 2 (r + hg), (6) α = atan d r + hs −zdg ≈ d r + hs −zdg , (7) where hs is the sensor projection centre. by combining (6), (7) we can obtain geoinformatics fce ctu 15(1), 2016 38 z. švec: direct georeferencing with correction of map projection x axis y axis z axis g s d xdg ydg zdg φ figure 4: decomposition of observation vector according to [10] hec = d2(r + hg) 2(r + hs + zdg )2 , (8) hg = hs + zdg, (9) hec = d2 2(r + hs + zdg ) · (10) correction of length component the first step is the conversion of the horizontal distance d to the geodetic distance s. s is always much shorter than the radius of reference sphere, therefore we can approximate the geodesic line by a circular arc. by using (7) we can form a relationship s = rα = ratan d r + hs + zdg , (11) from the definition of the map projection scale factor we obtain the equation for calculation projected length d́ = ms = mratan d r + hs + zdg , (12) where m stands for the map projection length factor. for this purpose it is sufficient to compute m in one point (preferably in the position of sensor projection centre). geoinformatics fce ctu 15(1), 2016 39 z. švec: direct georeferencing with correction of map projection d hg hec d′ hs zdg hg s f p g g′ map projection plane reference sphere α xdg x′dg r g0 figure 5: map projection distortion. g´ is the coresponding projected point of the g according to [10]. correction of horizontal angle since the normal-section-to-geodesic correction distinguishes negligible values, it should be applied to skew-to-normal and arc-to-chord corrections. skew-to-normal corrections express the angle between the directions of the spatial straight line and its corresponding normal section. regardless to used coordinate system it is given by [4] ζ = hg 2ρm e2 sin(2α)cos2ϕg, (13) where ϕg and hg are the geodetic latitude and ellipsoidal height of the ground point respectively, ρm represents the curvature radius in the meridian plane and α is azimuth of observation vector. for practical calculations hg should be replace by hs + zdg. arc-to-chord correction is angle between tangent of projected geodesic and its corresponding chord line. computation is individual for each map projection. experiment the asset of the above-mentioned procedure was proven in [10]. since the real lidar data is distorted by variety of random and systematic errors, the simulated data was used for the experiment. it was an applied method based on (2). it compared coordinates of ground geoinformatics fce ctu 15(1), 2016 40 z. švec: direct georeferencing with correction of map projection hec α α o fp g figure 6: detail for deriving the earth curvature correction. o stands for centre of the reference sphere (rotated 90 degrees clockwise). points restituted in e-frame and transformed to p-frame (assume as the “correct” data) and the coordinates of ground points restituted in p-frame with application of correction. most differences gain sub-millimeter values (tab 2.) table 2: experiment published in [10]. 
error statistic of simulated data (mdatum= 1,00005; krassovsky ellipsoid, utm projection) relative flight correction mean error [mm] σ [mm] max error [mm] height plane height plane height plane height 500m no 152.9 16.1 56.9 5.6 258.6 25 yes 0.2 0 0.1 0 0.3 0 2000m no 611.7 -41.9 227.7 89 1043.2 -254.9 yes 0.6 0.3 0.3 0.2 1.1 -0.4 8000m no 2446.8 -1871.1 914.9 1224.2 4313.5 -5278.4 yes 2.7 0 1.2 3.6 5.2 -7.2 situation in the czech republic the earth curvature correction is area-independent thus we will discuss the other components in the most frequent national coordinates in the area of the czech republic: s-jtsk (datum of uniform trigonometric cadastral network) / bpv (baltic vertical datum after adjustment) and utm 33(34)n / ellipsoidal height (grs80). regardless to a projection used, we can compute the skew-to-normal correction as [3] ζ́́ = 0, 108 hg 1000 sin (2α) cos2ϕg· (14) as the correction attains a value of 0,07´´ in extreme cases (sněžka mountain, azimuth 45°), we can disrespect it. geoinformatics fce ctu 15(1), 2016 41 z. švec: direct georeferencing with correction of map projection s-jtsk/bpv change of the datum scale between wgs 84 to s-jtsk is -8.750 ppm, i.e if the observation vector is 1 km length it cause its shortening by 8.75 mm. this factor should be taken into account esspetialy in the case of high survey flights. s-jtsk use double conformal conical krovak projection (epsg code 5514). local scale is given by [2] m = αr cosu n cos ϕ , (15) where the first part marks constants α = 1.000597, n radius of the cross section and u spherical latitude. the projection has 2 undistorted parallels. the local scale causes distortion in the range from -10cm/km to 14cm/km.the difference between d and d´ in some selected locations shows tab.3 arc-to-chord correction is given (after removing high-order terms) as [8] δ = (d́g − d́s) [2ks rg rs + kg rs rg ], (16) ki = sin s0 − sin si 6 sin s0 ≈5.314510−9∆ri, (17) ∆ri = ri −r0,r0 = 1298039m,ri = √ x2i + y 2 i ,d́i = atan yi xi , (18) where r means distance of the point from krovak´s projection origin, d́i angle beetween x-axis and the point form projection origin and si cartographic parallel. the undistorted parallel has been chosen as s0 = 78◦30́ . for calculation parameters of the ground point, we can use the rough position obtained by the uncorrected vector xpdg.the magnitude of the correction depends on the length of the horizontal distance, orientation of xpdg and distance from undistorted cartographical parallels. it attains a size of few (in extreme case up to 10) arcseconds. utm/elipsoidal height (grs80) for practical applications, elipsoid grs80 and elipsoid wgs 84 are identical, thus the mdatum is the same. the local scale of each utm strip is given by this simplified formula: m = (1 + λ2 2 cos2ϕ)m0, (19) where λ is longitude (measured from central meridian of the strip), ϕ is latitude and m0 thescale factor of the central meridian. for the reduction of the scale distortion at the boundaries of the strips m0 = 0, 9996 has been chosen. it follows the distortion -40cm/km at the central meridian and +17cm/km on the boundary of the strip. the length distortion of both projections shows fig. 7. after some simplification and removing high-order terms we can compute arc-to-cord correction as δ = −ydg (3xs + xdg ) 6m20r2 · (20) geoinformatics fce ctu 15(1), 2016 42 z. švec: direct georeferencing with correction of map projection figure 7: scale distortion of utm 33n (up), s-jtsk (down). unit of the values is cm/km. 
Table 3: Difference between d and d′ at some selected locations.

Location           | h [m] | d [m] | S-JTSK: m | d′ [m]  | Δd [m] | UTM 33N: m | d′ [m]  | Δd [m]
Jizera             | 1166  | 1000  | 1.00006   | 999.877 | -0.123 | 0.99962    | 999.437 | -0.563
Zruč nad Sázavou   | 400   | 1000  | 0.9999    | 999.837 | -0.163 | 0.9996     | 999.537 | -0.463
Znojmo             | 300   | 1000  | 0.99994   | 999.893 | -0.107 | 0.9997     | 999.653 | -0.347

Conclusion

Mathematically, the only rigorous way of direct georeferencing is to restitute the scene in an e-frame. However, after applying the above-mentioned corrections, the residuals of the p-frame deformations are negligible in comparison with the noise of GNSS/IMU measurements. Although some formulas are simplified, the remaining residuals stay below 1 cm even at a relative flying height of 8000 m. The angular corrections have the smallest impact and can be ignored at low relative flying heights (up to 1000 m). One of the main arguments for the choice of frame for georeferencing is computational cost. The most demanding step is the transformation of the EOP to national coordinates (approximately four times the computation time of transforming a ground point). However, GNSS/IMU record data at rates of up to several hundred Hz, while state-of-the-art lidar systems emit pulses at frequencies of up to 500 kHz [10]. The number of ground points is therefore much higher than the number of EOP observations, and transforming the ground points consequently dominates the computational effort. When georeferencing in national coordinates, it is strictly recommended to verify whether the software takes the p-frame corrections into account.

References

[1] Antonio Angrisano. "GNSS/INS integration methods". PhD thesis. Università degli Studi di Napoli, 2010. URL: http://www.ucalgary.ca/engo_webdocs/other/angrisano_phd%20thesis_eng2.pdf.
[2] Petr Buchar. Základy matematické kartografie. (In Czech). ČVUT v Praze, 2002.
[3] Miloš Cimbálník and Leoš Mervart. Vyšší geodézie 1. (In Czech). ČVUT v Praze, 1996.
[4] Rodney E. Deakin. Traverse Computation on the Ellipsoid and on the Universal Transverse Mercator Projection. RMIT University, Mar. 2010. URL: http://www.mygeodesy.id.au/documents/trav_comp_v2.1.pdf.
[5] Dorota Grejner-Brzezinska, Charles Toth, and Yudan Yi. "On improving navigation accuracy of GPS/INS systems". In: Photogrammetric Engineering & Remote Sensing 71.4 (2005), pp. 377–389. URL: http://eserv.asprs.org/pers/2005journal/apr/2005_04_377-389.pdf.
[6] B. Hofmann-Wellenhof, H. Lichtenegger, and J. Collins. Global Positioning System: Theory and Practice. 5th ed. Springer, 2001. DOI: 10.1007/978-3-7091-6199-9.
[7] Klaus Legat. "Approximate direct georeferencing in national coordinates". In: ISPRS Journal of Photogrammetry and Remote Sensing 60 (2006), pp. 239–255. DOI: 10.1016/j.isprsjprs.2006.02.004.
[8] Jan Skaloud. "Direct georeferencing in aerial photogrammetric mapping". In: Photogrammetric Engineering and Remote Sensing 68.3 (2002), pp. 207–210. URL: http://worldcat.org/issn/00991112.
[9] Jan Skaloud and Klaus Legat. "Theory and reality of direct georeferencing in national coordinates". In: ISPRS Journal of Photogrammetry and Remote Sensing 63 (Mar. 2008), pp. 272–282. DOI: 10.1016/j.isprsjprs.2007.09.002.
[10] Yongjun Zhang and Xiang Shen. "Direct georeferencing of airborne lidar data in national coordinates". In: ISPRS Journal of Photogrammetry and Remote Sensing 84 (Oct. 2013), pp. 43–51. DOI: 10.1016/j.isprsjprs.2013.07.003.

A quick method for the texture mapping of meshes acquired by laser scanner

Francesco Gabellone (1), Ivan Ferrari (2), Francesco Giuri (2)
(1) Consiglio Nazionale delle Ricerche, Istituto per i Beni Archeologici e Monumentali (IBAM), Via Monteroni, 73100 Lecce, Italy, f.gabellone@ibam.cnr.it
(2) Research fellow at IBAM, Via Monteroni, 73100 Lecce, Italy

Abstract

The methodology described in this article was developed in connection with two different projects and entails texture mapping of meshes acquired by a time-of-flight laser scanner. In order to verify its operational effectiveness and applicability to other contexts, sites with extremely different morphological characteristics were studied. The basic rationale of this simple method derives from the need to obtain different types of mapping – including RGB real-colour images, infra-red images, false-colour images from georadar scans, etc. – for the same scanned surface. To solve this problem, we felt that the most appropriate step was to obtain a UVW mapping based on the high-resolution real-colour images and then use the same coordinates to rapidly map the false-colour images as well. Thus we fitted a device to the camera to determine its trajectory (similar to a gunsight); when scanned by the laser scanner in the same context as the monument, it makes it possible to know the exact coordinates of the viewpoint.

Keywords: laser scanning, texture mapping, projection camera

Texture mapping in laser scanner applications

1. State of the art

The progress of laser scanning technology has for some time been focused on two different areas: i) the continuous and increasing demand from users for more accurate and faster machines, and ii) a tendency to build the scanners in light alloys, easy to handle and more user-friendly. Within a few years the standard acquisition rate has gone from 50,000 pt/sec to 1,000,000 pt/sec, a performance increase of about twenty times. At the same time, these machines have fully entered the standard equipment of small and medium-sized companies, a process which has established, after a gestation period of about twenty years, their full usability even by users different from those historically identified with universities and research centres. This increasing diffusion of the laser scanner as a "tool", encouraged by its relative ease of use, corresponds to a transfer of advanced technologies from the research centres to the big manufacturers, aimed at improving not just the acquisition of points in 3D space, but above all the post-processing of the data.
the reference was to the process of data-fusion for alignment and registration of single shots, to the process to make the meshes and their optimization, to the inspection of models, and finally to the texturing. at geoinformatics fce ctu 9, 2012 17 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . present exist a choice of several software solutions, able to manage successfully this critical stage of the acquisition and restitution process of historical and artistic artifacts. in this contest some research groups are included, now focused on the creation of tools that allow the total (or almost) automation of all post-processing processes of point clouds management, capturing data from the visible color of surfaces with color-per-vertex (cpv) information, i.e. with point clouds that directly give spatial information and color information in rgb space, associated with each vertex. this approach have some problems that are not insignificant: the color information associated with the cpv points can’t be reworked or replaced with new color information. to point out it an example could be represented by laser scanning aimed at the documentation for the restoration, in which it is extremely useful to compare “ante and post rem” (before and after the restoration), or in case studies where it is useful to replace a color texture with images related to scientific analysis, such as the ir observation, or the false-color maps from gpr data. contrarywise, the adoption of a classical texture mapping method allows the full flexibility in creating multi-resolution maps – very useful for the development of real-time and gaming applications – and an easy replacement of the same maps with scientific information about various themes, that may arise for the artifact studied. leaving aside the most common mapping systems (cubic, spherical, planar, etc.), the most effective and advanced texture mapping technique available in software for the processing of point clouds, are substantially referable to the point-to-point method [2]. concretely, the operator has to identify the correspondences existing between geometry and texture to map, assigning each map some uvw coordinates that link a point on the surface with a pixel of the texture. this is all the more complex the more the surface is not well characterized in terms of three-dimensional topology, so on a perfectly flat (or smooth) surface will be very difficult to find corresponding points to the pixels of a texture. the research work of point-pixel correspondences is equally challenging in complex artifacts, which generally require a subdivision of the total geometry in small parts to map with different textures ( fig. 01). figure 1: example of multiple patching for the application of many textures (m. delle croci, matera) geoinformatics fce ctu 9, 2012 18 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . 2. objectives the main purpose of this paper is to illustrate a simple texture mapping method that has been applied in two contests, very different in their morphology, in order to test the effective on site operability and the full usability also by users not highly specialized. the case studies related to two monuments with the same name, the rupestrian church of santo spirito in monopoli and the church of santo spirito in lecce. in both cases the mapping of the surfaces has been made using the images of the visible color, taken with a camera and with criteria set out below. 
the next step of this study will consist in the mapping of models with false-color images, coming from infrared observation (ir) and gpr data analysis, which will be referenced using the same uvw coordinates derived from the application of the simple method described in this paper. 3. case studies the rupestrian church of santo spirito is located in monopoli, a village few km south of bari, italy. the monument is part of a diagnostic project for it restoration sponsored by san domenico’s foundation, for many years active in the territory of the south of italy for the preservation and the promotion of the rupestrian culture. this structure has a peculiarity in the history of architecture: every wall, vault, recess, obtained “per via di levare” (working subtracting material), is an unique and not repeatable piece. this value derives, for obvious reasons, from the very nature of the buildings, that imitate open-air (sub divo) architectures, but they do it in an opposite way, producing sometimes complex planimetric solutions, that could be called “organic”. their development proceeds in a natural and sinuous way, the walls are often curved and not perpendicular, the ceilings and the vaults are on inclined planes, everything in the geometry of the surfaces is unpredictable. the morphological and architectural study of these hypogean spaces, just for the reasons mentioned above, requires a cognitive approach aimed at the acknowledgment of the constructive singularities, and since no architectural element can be traced back, a priori, to planar and regular forms, their survey must necessarily follow a procedure as rigorous and precise as possible, without simplifications. the survey of the rupestrian church of santo spirito in monopoli has been carried out with the indirect active method, using leica scanstation 2 laser scanner. considering the morphological characteristics of the studied monument and the large size of the area to be surveyed, a 3 mm point-to-point step was adopted for the survey, which also represents the most accurate value tested by the instrument. this parameter allow to obtain a high resolution mesh, suitable to the survey purposes, but at the same time it led to an huge complexity of the 3d model, which imposed complex post-processing solutions. the survey involved a total area of about 458 m2, which extends from the outer limits marked by the two roads that surround it, up to the apse on the opposite side of the present entrance. the internal area, measured at the floor, measures 136,12 m2 and it consists in two spaces, stylistically different, adjacent to each other, but forming a unitary space. the survey produced a total of about 40 million polygons, with a mesh resolution of 3 mm. considering the difficulties of managing on desktop computer, the model has been divided into four high resolution parts, and later it has been decimated by a curve filtering parameter of 80◦, to bring it to a 5 million polygons complexity. of course, this filtering allowed to maintain a high level of details in areas with higher curvature (so more morphological complexity), and geoinformatics fce ctu 9, 2012 19 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . figure 2: the rupestrian church of santo spirito, monopoli (bari), general plan to eliminate redundant data in pseudo-planar areas (side walls). the second case study refers to the church of santo spirito in lecce. the original structure of the church is dated 1392. 
it has been enlarged and reworked several times, first by the architect gian giacomo dell’acaya, master of the hospital, and then rebuilt between 1691 and 1728 on the plans by giuseppe zimbalo, who died during the works, in 1710. the plant of the building, with a single nave, is marked by four arches and six niches, made of local stone and baroque stucco, which enclose the same number of altars. the morphology of the walls is extremely rich in details, as we would expect from a building of this period, compared to an overall structure very simple and regular. the surveys are functional to a diagnostic and research project, planned in the field of the aitech project activities (applied innovation technologies for diagnosis and conservation of built heritage), which is part of the “rete dei laboratori” (laboratory network) initiative, sponsored by the ministry of economic development, the ministry of university and research and the region of puglia, in italy. on this monument, the work team ibam cnr of lecce has set up some researches aimed at the mapping of structural cracks and at ir observation of masonry, activities for which a preliminary three-dimensional restitution of surfaces is required, necessary for the accurate knowledge of architectural features. even for this monument the survey operations were performed using a laser scanner leica scanstation 2, where point clouds have been aligned, filtered and decimated with the same criteria used for the rupestrian church explained above. geoinformatics fce ctu 9, 2012 20 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . figure 3: the rupestrian church of santo spirito, monopoli (bari), axonometric view mapping using camera position informations 4. description of the method the correct mapping of the three-dimensional model acquired through laser scanning is usually one of the most problematic issues in the process of accurate and verisimilar restitution of an artifact. often, some important works by laser scanning are presented as well as mere geometric shapes, with simple shaded views without any applied texture. certainly a simplified presentation of reality, in which the chromatic values of the surfaces are essential elements for a correct reading of the conservation status, of the characteristics of the constituent materials, of superficial cracks and other micro detail characteristics, impossible to reproduce with a time-of-flight laser scanner. for this reason, we tried to refine an old method of mapping, widely known, in the attempt to provide, even the beginner, the necessary tools to produce 3d textured models with good accuracy. in this kind of applications the use of quality images at high resolution will allow to get extremely realistic models, almost indistinguishable from the real one, to be used productively in the documentation for the knowledge of historical buildings, in restoration works, in faithful representations of the current state, in serious games, etc. so the basis of this method is the simple way of mapping according to a camera projection mode, geoinformatics fce ctu 9, 2012 21 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . figure 4: the rupestrian church of santo spirito, monopoli (bari), perspective section in some software called camera mapping, or projection mapping. 
this technique is generally known and applied in computer graphics especially for making interactive 3d models from two-dimensional images, very useful for the conversion of two-dimensional paintings, frescoes, engravings, in 3d explorable scenes. the usefulness of this method is even more appreciated in mapping operations starting from photos. it is indeed known that cylindrical, spherical or cubical projections, are applicable only in specific cases and can never be used to overlap the photos directly on a scanned object. the planar mapping, in particular, projects the texture on the object according to the plane normal, the plane orientationand so the projection direction is chosen according to the necessities and the kind of the object to texturize, but this type of projection is not coincident with a camera take. in a lot of case studies is wrongly used precisely this technique to map complex objects, by assigning small parts of the scanned object to specific photo-textures, with poor results and obvious signs of stretching in the areas with a different angle projection. this means, strictly speaking, that the planar mapping could be applied just on a planar object with an ortho-rectified texture, according to the rules of the orthogonal projection. each photographic image is rather a perspective view, with a point of view, a target point of the perspective, a visual field and deformations dependent on the quality and nature of the lenses. then in theory, knowing exactly these four parameters and mapping the photographic images according to the perspective rules, that is the same camera mapping method, you can get a mapping with an almost perfect texture-3d model overlap. the examination of the solutions proposed for the definition of each of these parameters is: shooting position (or point of view, xyz position of picture center), target point of the perspective (look at), characteristics of the lenses (focal length and distortion). there are several criteria to establish with precision the camera position in a 3d scene. the geoinformatics fce ctu 9, 2012 22 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . figure 5: church of santo spirito in lecce, multiple texture applied on south wall and example of texture replacement using same uvw coordinates with color correct image first, widely experienced by the information technologies lab of lecce (itlab), consists in the recognition of only the significant points of the scene with digital photogrammetric techniques, and then, after the orientation process, in the recovery of the camera positions of each shot after a patient work of restitution. however, this technique can produce also significant residual errors and lead to further uncertainty degrees in the research of the camera positions, moreover this technique, although it is classified as “low cost”, requires a long processing time, which should be added to that one needed for the post-processing of point clouds. geoinformatics fce ctu 9, 2012 23 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . figure 6: screen shot of 3d model with and without texture applied. figure 7: the rupestrian church of santo spirito, monopoli (bari), wireframe view with texture applied geoinformatics fce ctu 9, 2012 24 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . 
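The perspective relationship described above can be made explicit with a small sketch: given the point of view, the target point of the perspective and the focal length, a mesh point is projected into the (distortion-free) image, which is exactly the correspondence that camera mapping exploits. The look-at construction, the simple Brown-style radial undistortion and all names below are illustrative assumptions, not the actual implementation used by the authors; in practice the distortion coefficients come from a lens calibration or the manufacturer's software.

```python
import numpy as np

# Hedged sketch of the camera (perspective) projection behind camera mapping
# and of a basic radial ("barrel"/"pincushion") distortion removal.

def look_at_rotation(camera_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Rows of the returned matrix are the camera right, down and viewing axes."""
    forward = target - camera_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    down = np.cross(forward, right)
    return np.vstack([right, down, forward])

def project(point, camera_pos, target, focal_px, principal_point):
    """Project a 3D mesh point into pixel coordinates of the undistorted photo."""
    camera_pos = np.asarray(camera_pos, float)
    R = look_at_rotation(camera_pos, np.asarray(target, float))
    p_cam = R @ (np.asarray(point, float) - camera_pos)
    if p_cam[2] <= 0:                       # point behind the camera
        return None
    return (principal_point[0] + focal_px * p_cam[0] / p_cam[2],
            principal_point[1] + focal_px * p_cam[1] / p_cam[2])

def undistort(xy, k1, k2, focal_px, principal_point, iterations=5):
    """Remove radial distortion from a measured pixel by fixed-point iteration."""
    xd = (np.asarray(xy, float) - principal_point) / focal_px   # normalised coords
    xu = xd.copy()
    for _ in range(iterations):
        r2 = np.sum(xu**2)
        xu = xd / (1.0 + k1 * r2 + k2 * r2**2)
    return xu * focal_px + principal_point

if __name__ == "__main__":
    C = np.array([2.0, -5.0, 1.6])      # point of view surveyed by the scanner
    T = np.array([2.0,  0.0, 2.0])      # target point identified on the 3D model
    pp = np.array([2736.0, 1824.0])     # principal point (assumed sensor centre)
    print(project([2.3, 0.1, 2.2], C, T, focal_px=4200.0, principal_point=pp))
    print(undistort([100.0, 50.0], k1=-0.08, k2=0.01, focal_px=4200.0, principal_point=pp))
```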
the method proposed in this paper consists in shooting with laser scanner the camera positions within the scene and in overlapping them with their respective virtual cameras in a 3d graphic software, respecting some important procedures. the first consideration concerns the difficulty of reproducing the exact point of view of the camera, which even though the double pass with the scanner set at the highest resolution (about 2 mm), can’t be identified with certainty. for this purpose a metal viewfinder long about 8 cm has been built, which mounted on the top of the photo camera provides a clear position reference that accurately identifies the point of view, and it has been also useful for a first orientation of the target point of the perspective. this first orientation is to be considered only rough, because a very little rotation fault of only few tenths of a degree on the basis of the virtual camera, produces significant phase shifts of the target point on the 3d surface. therefore, the target point has been determined with certainty considering the center of the photographic image. indeed, turning the camera around its point of view, now identified with certainty, is it possible to fit the center of the frame, marked by the intersection of the diagonals of the image, with the corresponding target point of the perspective, identified on the 3d model surface. this must of course be chosen on site, so it can be recognized during the mapping stages. under low surface characterization, may be suggested the placing of a marker visible by the laser scanner. the last element, extremely important for the success of the method, is the elimination of the lens distortion. “barrel” or “pincushion” distortions are always present, despite the use of aspherical professional lenses usually advertised with no distortions lens. these can be easily removed by the use of specific software, usually supplied by the manufacturers. the 3d model mapping with the camera mapping method, together with the identification of the point of view, the target point of the perspective and the elimination of distortions, creates in our opinion the best method of mapping complex surfaces. of course this method can also be applied in an empirical way, manually moving the camera by making attempts up to find its exact position in space, but the results will be obtained only for approximation and they will not guarantee the best accuracy and the least time taken. 5. conclusions the application of this method has proven an extremely easy to use and the full applicability on very complex and wide surfaces. in the two case studies proposed in this paper was possible to document with accurate precision both the morphological and the geometrical characteristics, both to a hyper-realistic representation of the current state of the surfaces. all that types of surface alteration that are outside from the maximum scanner resolution, have been documented by the mapping of images of about 20 m pixels, taken from a full frame digital reflex camera, canon 5d mark ii. in the example in figure 5, concerning the south wall of the church of santo spirito in lecce, three images were used, properly equalized and corrected, mapped on three separate parts of the model. the method gave excellent results both on planar parts and on round elements. 
satisfactory results were obtained also in shots with great depth of field and short focal length (16 mm), showing that the elimination of the lens distortions makes fully usable also images with evident “barrel” deformations. a following application of this technique will allow using the coordinates of camera mapping, conveniently converted in uvw, to get textures from thematic maps, useful for providing false-color information about the conservation status of the structures. geoinformatics fce ctu 9, 2012 25 gabellone, f. et al.: a quick method for the texture mapping of meshes . . . references [1] agathos, a., fisher, r. b., colour texture fusion of multiple range images, proc. 4th int. conf. on 3-d digital imaging and modeling, banff, to appear, 2003. [2] callieri, m., cignoni, p., rocchini, c., and scopigno, r., weaver, an automatic texture builder, 3d data processing, visualization and transmission, int. conf., padova, 2002, pp. 562-565. [3] bernardini, f., martin, i., and rushmeier, h., high-quality texture reconstruction from multiple scans, ieee transactions on visualization and computer graphics, 7(4), 2001, pp. 318-332 [4] beraldin, j-a., blais, f., cournoyer, l., picard, m., gamache, d., valzano, v., bandiera, a., and gorgoglione, m., multi-resolution digital 3d imaging system applied to the recording of grotto sites: the case of the grotta dei cervi, in vast, october 30 november 4 2006. [5] gabellone, f., ancient contexts and virtual reality: from reconstructive study to the construction of knowledge models, journal of cultural heritage, journal number 9069, elsevier b.v., 2009. [6] gabellone, f., virtual cerrate: a dvr-based knowledge platform for an archaeological complex of the byzantine age, in caa 2008, computer applications and quantitative methods in archaeology, budapest 2008. [7] gabellone, f., metodologie integrate per lo studio ricostruttivo e la conoscenza dello stato attuale dei beni culturali, in: il dialogo dei saperi, metodologie inte-grate per i beni culturali, a cura di f. d’andria, d. malfitana, n. masini, g. scardozzi, edizioni scientifiche, 2010. geoinformatics fce ctu 9, 2012 26 ___________________________________________________________________________________________________________ geoinformatics ctu fce 291 automated image-based procedures for accurate artifacts 3d modeling and orthoimage generation marc pierrot-deseillignya, livio de lucab, fabio remondinoc a institut géographique national (ign), paris, france email: marc.pierrot-deseilligny@ign.fr b laboratoire map-gamsau (cnrs/mcc), marseille, france email: livio.deluca@gamsau.archi.fr c 3d optical metrology unit, bruno kessler foundation (fbk), trento, italy email: remondino@fbk.eu keywords: photogrammetry, 3d modeling, bundle adjustment, orthoimage, open-source abstract: the accurate 3d documentation of architectures and heritages is getting very common and required in different application contexts. the potentialities of the image-based approach are nowadays very well-known but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. the article presents a set of photogrammetric tools developed in order to derive accurate 3d point clouds and orthoimages for the digitization of archaeological and architectural objects. the aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) 
based on 3d surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representations needs (2d technical documentation, 3d reconstruction, web visualization, etc.). 1. introduction the creation of 3d models of heritage and archaeological objects and sites in their current state requires a powerful methodology able to capture and digitally model the fine geometric and appearance details of such sites. digital recording, documentation and preservation are demanded as our heritages (natural, cultural or mixed) suffer from ongoing attritions and wars, natural disasters, climate changes and human negligence. in particular the built environment and natural heritage have received a lot of attention and benefits from the recent advances of range sensors and imaging devices [1]-[3]. nowadays 3d data are a critical component to permanently record the form of important objects and sites so that, in digital form at least, they might be passed down to future generations. this has generated in the last decade a large number of 3d recording and modeling projects, mainly led by research groups, which have realized very good quality and complete digital models [4]-[8]. indeed remote sensing technologies and methodologies for cultural heritage 3d documentation and modeling [10] allow the generation of very realistic 3d results (in terms of geometric and radiometric accuracy) that can be used for many purposes like historical documentation, digital preservation and conservation, cross-comparisons, monitoring of shape and colors, simulation of aging and deterioration, virtual reality/computer graphics applications, 3d repositories and catalogues, web-based visualization systems, computeraided restoration, multimedia museum exhibitions and so on. but despite all these potential applications and the constant pressure of international heritage organizations, a systematic and targeted use of 3d surveying and modeling in the cultural heritage field is still not yet employed as a default approach. and when a 3d model is generated, it is often subsampled or reduced to low-resolution model for online visualization or to a 2d drawing due to a lack of software or knowledge in handling properly 3d data by non-expert. although digitally recorded and modeled, our heritages require also more international collaborations and information sharing, to make them accessible in all the possible forms and to all the possible users and clients, e.g. via web [11]. nowadays the digital documentation and 3d modeling of cultural heritage should always consist of [2]: – recording and processing of a large amount of 3d (possibly 4d) multi-source, multi-resolution, and multi-content information; – management and conservation of the achieved 3d (4d) models for further applications; – visualization and presentation of the results to distribute the information to other users allowing data retrieval through the internet or advanced online databases; – digital inventories and sharing for education, research, conservation, entertainment, walkthrough, or tourism purposes. ___________________________________________________________________________________________________________ geoinformatics ctu fce 292 the article deals with the first of the aforementioned items. an automated image-based 3d modeling pipeline for the accurate and detailed digitization of heritage artifacts is presented. 
the developed methodology, composed of opensource photogrammetric tools, is described with examples and accuracy analyses. 1.1 why another open-source, automated, image-based 3d reconstruction methodology? today different image-based open-source approaches are available to automatically retrieve dense or sparse point cloud from a set of unoriented images (e.g. bundler-pmvs, microsoft photosynth, autodesk photofly, arc3d, etc.). they are primarily based on computer vision methods and allow the generation of 3d information even if the images are acquired by non-expert people with no ideas of photogrammetric network and 3d reconstruction. thus the drawback is the general low reliability of the procedure and the lack of accuracy and metrics in the final results, being useful primarily for visualization, image-based rendering or lbs applications. on the other hand the authors are developing a photogrammetric web-based open-source pipeline, based on solid principles and guidelines, in order to derive precise and reliable 3d reconstructions useful for metric purposes in different application context and according to several representation needs. 2. surveying and 3d modeling nowadays there are a great number of sensors and data available for digital recording and mapping of visual cultural heritage [10]. reality-based 3d surveying and modeling is meant as the digital recording and 3d reconstruction of visual and existing scenes using active sensors and range data [6], passive sensors and image data [14], classical surveying (e.g. total stations or gnss), 2d maps [15] or an integration of the aforementioned methods. the choice or integration depends on the required accuracy, object dimensions, location constraints, instrument‟s portability and usability, surface characteristics, working team experience, project budget, and final goal of the survey and so on. optical range sensors like pulsed (time-of-flight), phase-shift and triangulation-based (light sheet or pattern projection) instruments have received in the last years a great attention, also from non-experts, for 3d surveying and modeling purposes. range sensors directly record the 3d geometry of surfaces, producing quantitative 3d digital representations (point clouds or range maps) in a given field of view with a defined measurement uncertainty. range sensors are getting quite common in the surveying community and heritage field, despite their high costs, weight and the usual lack of good texture. there is often a misused of such sensors simply because they deliver immediately 3d point clouds neglecting the huge amount of work to be done in post-processing in order to produce a geometrically detailed and textured 3d polygonal model. on the other hand, passive optical sensors (like digital cameras) provide for image data which require a mathematical formulation to transform the 2d image features into 3d information. at least two images are generally required and 3d data can be derived using perspective or projective geometry formulations [14][17]. image-based modeling techniques, mainly photogrammetry and computer vision, are generally preferred in case of lost objects, simple monuments or architectures with regular geometric shapes, small objects with free-form shape, point-based deformation analyses, low budget terrestrial projects, good experience of the working team and time or location constraints for the data acquisition. 
2.2 standards and best practice for 3d modeling issues best practices and guidelines are fundamental for executing a project according to the specifications of the customer or for commissioning a surveying and 3d modeling project. the right understanding of technique performances, advantages and disadvantages ensure the achievement of satisfactory results. many users are approaching the new surveying and 3d modeling methodologies while other not really familiar with them require clear statements and information about an optical 3d measurement system before investing. thus technical standards, like those available for the traditional surveying or cmm field, must be created and adopted, in particular by all vendors of 3d recording instruments. indeed most of the specifications of commercial sensors contain parameters internally defined by the companies. apart from standards, comparative data and best practices are also needed, to show not only advantages but also limitations of systems and software. as clearly stated in [18], best practices help to increase the chances of success of a given project 3. automated image-based 3d reconstruction the developed photogrammetric methodology for scene recording and 3d reconstruction is presented in detail in the next sections. the pipeline consists of automated tie point extraction, bundle adjustment for camera parameters derivation, dense image matching for surface reconstruction and orthoimages generation. the single steps of the 3d reconstruction pipeline have been investigated in different researches with impressive results in terms of automated markerless image orientation [19]-[21] and dense image matching [22]-[25]. ___________________________________________________________________________________________________________ geoinformatics ctu fce 293 3.1 camera calibration protocol and image acquisition the pipeline is primarily focused on terrestrial applications, therefore on the acquisition and processing of terrestrial convergent images of architectural scenes and heritage artifacts. the analysis of the site context relates to the lighting conditions and the presence of obstructions (obstacles, vegetation, moving objects, urban traffic, etc.). the former influences the shooting strategy and the exposure values, the latter is essential in order to perform multiple acquisitions and to select the correct camera focal length. the choice of focal lengths has a direct influence on the number of acquisitions and the resolution of the final point cloud, therefore is always better to know in advance the final geometric resolution needed for the final 3d product. the employed digital camera must be preferably calibrated in advanced following the basic photogrammetric rules in order to compute precise and reliable interior parameters [26]. although the developed algorithms and methodology can perform self-calibration (i.e. on-the-field camera calibration), it is always better to accurately calibrate the camera using a 3d object / scene (e.g. lab testfield or building‟s corner) following the basic photogrammetric rules: a dozen of convergent images at different distances from the object, with orthogonal roll angles and covering the entire image format. each objective and focal length employed in the field must be calibrated. the images acquired for the 3d reconstruction should have an overlap around 80% in order to ensure the automatic detection of tie points for the image orientation. the shooting configuration can be convergent, parallel or divergent. 
convergent images ensure to acquire possible hidden details. if a detail is seen in at two images, then it can be reconstructed in 3d. it is also important to keep a reasonable base-to-depth (b/d) ratio: too small baselines guarantee more success in the automatic tie point‟s extraction procedure but strongly decrease the accuracy of the final reconstruction. the number of images necessary for the entire survey depends essentially on the dimensions, shape and morphology of the studied scene and the employed focal length (for interiors fish-eye lenses are appropriate). figure 1 illustrates the possible acquisition schemas according to three different contexts: external, internal, façade. figure 1: image network configuration according to different scenarios: large monument (left), indoor environment (center), building‟s façade (right). 3.2 image triangulation for the orientation of a set of terrestrial images, the method relies on the open source apero software [27]. as apero is targeted for a wide range of images and applications, it requires some input parameters to give to the user a fine control on all the initialization and minimization steps of the orientation procedure. apero is constituted of different modules for tie point extraction, initial solution computation, bundle adjustment for relative and absolute orientation. if available, external information like gnss/ins observations of the camera perspective centers, gcps coordinates, known distances and planes can be imported and included in the adjustment. apero can also be used for camera selfcalibration, employing the classical brown‟s parameters or a fish-eye lens camera model. indeed, although strongly suggested to used previously calibrated cameras, non-expert users may not have accurate interior parameters which can therefore be determined on-the-field. the typical output of apero is an xml file for each image with the recovered camera poses. 3.3 surface measurement with automated multi-image matching once the camera poses are estimated, a dense point cloud is extracted using the open-source micmac software [28]. micmac was initially developed to match aerial images and then adapted to convergent terrestrial images. the matching has a multi-scale, multi-resolution, pyramidal approach (figure 2) and derives a dense point cloud using an energy minimization function. the pyramidal approach speeds up the processing time and assures that the matched points extracted in each level are similar. the user selects a subset of “master” images for the correlation procedure. then for each hypothetic 3d points, a patch in the master image is identified, projected in all the neighborhood images and a global similarity is derived. finally an energy minimization approach, similar to [22] is applied to enforce surface regularities and avoid undesirable jumps. ___________________________________________________________________________________________________________ geoinformatics ctu fce 294 figure 2: example of the pyramid approach results for surface reconstruction during the multi-scale matching. 3.4 point cloud generation starting from the derived camera poses and multi-stereo correlation results, depth maps are converted into metric 3d point clouds (figure 3). this conversion is based on a projection in object space of each pixel of the master image according to the image orientation parameters and the associated depth values. for each 3d point a rgb attribute from the master image is assigned. 
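The back-projection step just described can be sketched in a few lines: each pixel of the master image is turned into a viewing ray, scaled by its matched depth, rotated into object space and given the RGB value of the master image. The simple pinhole parametrisation and all names below are assumptions for illustration and do not reflect MicMac's internal data structures.

```python
import numpy as np

def depth_map_to_cloud(depth, rgb, R, C, focal_px, principal_point):
    """Back-project a master-image depth map into an (N, 6) XYZRGB point array.

    depth: (h, w) depths along the viewing direction, <= 0 meaning "not matched";
    rgb:   (h, w, 3) master image;
    R:     rotation whose rows are the camera axes expressed in the object frame;
    C:     projection centre in object coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # viewing ray of every pixel in camera coordinates (unit z component)
    rays = np.stack([(u - principal_point[0]) / focal_px,
                     (v - principal_point[1]) / focal_px,
                     np.ones_like(depth, dtype=float)], axis=-1)
    pts_cam = rays * depth[..., None]                # scale each ray by its depth
    pts_obj = pts_cam.reshape(-1, 3) @ R + C         # X = R^T p_cam + C (row-vector form)
    valid = depth.reshape(-1) > 0                    # drop unmatched pixels
    return np.hstack([pts_obj[valid], rgb.reshape(-1, 3)[valid]])
```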
figure 3: the multi-stereo image matching method: the master image (left), the matching result (in term of depth map) in the last pyramidal step (center) and the generated colorized point cloud (right). 3.4 orthoimage generation due to the high density of the produced point clouds, the orthoimage generation is simply based on an orthographic projection of the results. the final image resolution is calculated according to the 3d point cloud density (near to the initial image footprint). several point clouds (related to several master images) are seamless assembled in order to produce a complete orthoimage of surveyed scene (figure 4). figure 4: a typical orthoimage generated by orthographic projection of a dense point cloud. ___________________________________________________________________________________________________________ geoinformatics ctu fce 295 3.5 informatics implementation and gui apero and mic-mac can be used as stand-alone programs in a linux os shell. the algorithms are also available with an end-user gui with dedicated context interfaces: a general interface for the apero-micmac chain, developed at the ign; a specific interface for the entire 3d reconstruction, integrated into nubes forma (maya plug-in) [11], developed at the cnrs map-gamsau laboratory (figure 5a). starting from the automatic processing results, this application allows to: o collect 3d coordinates and distances; o generate dense 3d point clouds on demand (globally or locally); o extract relevant profiles by monoplotting (rectified image / point cloud); o reconstruct 3d architectural elements using interactive modeling procedures; o extract from the oriented images and project onto the 3d data high-quality textures. a web-viewer for image-based 3d navigation and point clouds visualization, developed at the cnrs map-gamsau laboratory (figure 5b) for the visualization of the image-based 3d reconstructions produced with apero/micmac procedures [29]. the viewer allows to jump between the different image points of views, back-projecting the point clouds onto the images. it consist on a simple php-based web site (that user can publish on his/her own server) containing a folder for the 2d content (images), a folder for the 3d content (point clouds, polygons, curves) and a table with the camera parameters. figure 5: guis integrated into nubes forma (maya plug-in) developed at cnrs map-gamsau laboratory for image-based 3d reconstruction (a) and a web-viewer for image-based 3d navigation and point clouds visualization (b). 4. the tapenade project in order to give access to the realized procedure and software to a large number of users requiring metric 3d results (architects, archaeologist, conservators, etc.), the tapenade project (tools and acquisition protocols for enhancing the artifact documentation) [30] was started. this project aims to develop and distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on the developments mentioned in the previous sections and useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2d technical documentation, 3d reconstruction, web visualization, etc.). the project would like to define acquisition and processing protocols following the large set of executed projects and the long-lasting experience of the authors in 3d modeling applications and architectural documentation. 
examples, protocols and processing tools are available in the project web site. 5. case studies figure 7 and figure 8 present some examples related to different contexts (architectural exterior, building interiors, architectural element, archaeological excavation, museum object) and the relative 3d point clouds or orthoimages derived with the presented methodology. detailed information on several other case studies is available on the tapenade web site [30]. 5.1 accuracy and performance evaluation the results achieved with the 3d reconstruction pipeline described before were compared with some ground -truth data to check the metric accuracy of the derived 3d data. figure 6 shows a geometric comparison and accuracy evaluation of the proposed methodology. a set of ř images depicts a maya relief (roughly 3×2 m) acquired with a calibrated kodak ___________________________________________________________________________________________________________ geoinformatics ctu fce 296 dsc pro srl/n (4500×3000 px) mounting a 35 mm lens. the ground-truth data were acquired with a leica scanstation 2 with a sampling step of 5 mm. the generated image-based point cloud was compared with the rangebased one delivering a standard deviation of the differences between the two datasets of ca 5 mm. figure 6: examples of the geometric comparison with ground-truth data. original scene (left), derived 3d point cloud (center) and deviation map for a maya bas-relief (std = ca 5 mm). 6. conclusions the article presented an open-source set of tools for accurate and detailed image-based 3d reconstruction and webbased visualization of the metric results. the image processing for 3d reconstruction is fully automated although some interaction is possible for geo-referencing, scaling and to check the quality of the results. the methodology is very flexible and powerful thanks to the photogrammetric algorithms. different type of scenes can be reconstructed for different application contexts (architecture, excavations, museum collections, heritage site, etc.) and several representations can be delivered (2d technical documentation, 3d reconstruction, web visualization, etc.). the purpose of the developed method is to create a community with the aim of progressively enriches the performance and the relevance of the developed solutions by a collaborative process based on user feedbacks. if compared to other similar projects and products, tapenade aims to deliver open-source tools which can be used not only online (web-based) and with highly reliable and precise performances and results. 7. aknowledgments the results showed in this article are based on multiple contributions coming from students, engineers and young researchers. authors want acknowledge in particular isabelle clery (ign) and aymeric godet (ign/map) for their contributions on informatics implementation; nicolas nony (map), alexandre van dongen (map), mauro vincitore (map), nicolas martin-beaumont (ign/map) and francesco nex (fbk trento) for their essential contributions on the protocols and case studies. figure 8: examples of 3d reconstruction and orthoimages generation of complex architectural structures. 
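The accuracy check reported above (standard deviation of the differences between the image-based and the range-based point clouds) can be reproduced in outline with a nearest-neighbour comparison such as the sketch below; a rigorous evaluation would first co-register the two clouds (e.g. by ICP) and compare against the locally fitted surface rather than raw points. Names and the synthetic test data are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(test_cloud, reference_cloud):
    """Nearest-neighbour distance statistics between two (N, 3) point arrays."""
    distances, _ = cKDTree(reference_cloud).query(test_cloud)
    return {"mean": float(np.mean(distances)),
            "std": float(np.std(distances)),
            "max": float(np.max(distances))}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 1.0, size=(50_000, 3))          # ground-truth cloud
    test = reference[:10_000] + rng.normal(scale=0.005, size=(10_000, 3))  # ~5 mm noise
    print(cloud_to_cloud_stats(test, reference))
```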
___________________________________________________________________________________________________________ geoinformatics ctu fce 297 information object / scene 3d reconstruction architectural exterior dimensions: ca 15x20x20 m number of images: ca 160 processing time: ca 3 hours representation: 3d model/ortho architectural interior dimensions: ca 20x30x15 m number of images: ca 200 processing time: ca 4 hours representation: 3d model architectural elements dimensions: ca 30x150 cm number of images: ca 150 processing time: ca 3 hours representation: 3d model museum object dimensions: ca 15x5x20 cm number of images: ca 50 processing time: ca 1.5 hours representation: 3d model building façade dimensions: ca 60x18 m number of images: ca 40 processing time: ca 2 hours representation: 3d model/ortho archaeological excavation dimensions: ca 15x2 m number of images: ca 180 processing time: ca 4 hours representation: 3d model/ortho figure 7: examples of 3d metric reconstructions achieved with the presented open-source pipeline. ___________________________________________________________________________________________________________ geoinformatics ctu fce 298 8. references [1] ikeuchi, k., miyazaki, d. (eds): digitally archiving cultural heritage, 2007, springer, 503 pages. [2] li, z., chen, j., baltsavias, e. (eds): advances in photogrammetry, remote sensing and spatial information sciences. isprs congress book, 2008, taylor & francis group, london, 527 pages. [3] cowley, d.c. (ed.): remote sensing for archaeological heritage management. eac occasional paper no. 5/occasional publication of the aerial archaeology research group no. 3, 2011, 307 pages. [4] levoy, m., pulli, k., curless, b., rusinkiewicz, s., koller, d., pereira, l., ginzton, m., anderson, s., davis, j., ginsberg, j., shade, j., fulk, d.: the digital michelangelo project: 3d scanning of large statues. proc. siggraph computer graphics, 2000, pp 131–144. [5] bernardini, f., rushmeier, h., martin, i.m., mittleman, j., taubin, g.: building a digital model of michelangelo‟s florentine pieta. ieee computer graphics application, 2002, 22(1): 59–67. [6] gruen, a., remondino, f., zhang, l. photogrammetric reconstruction of the great buddha of bamiyan. the photogrammetric record, 2004, 19(107): 177–199. [7] el-hakim, s., beraldin, j., remondino, f., picard, m., cournoyer, l., baltsavias, e.: using terrestrial laser scanning and digital images for the 3d modelling of the erechteion, acropolis of athens. proc. conference on digital media and its applications in cultural heritage (dmach), 2008, amman, jordan, pp 3-16. [8] guidi, g., remondino, f., russo, m., menna, f., rizzi, a., ercoli, s.: a multi-resolution methodology for the 3d modelling of large and complex archaeological areas. int. journal of architectural computing, 2009, 7(1): 40-55. [9] remondino, f., el-hakim, s., girardi, s., rizzi, a., benedetti, s., gonzo, l.: 3d virtual reconstruction and visualization of complex architectures the 3d-arch project. int. arch. photogrammetry, remote sensing and spatial information sciences, 2009, 38(5/w10). isprs int. workshop 3d-arch 2009, trento, italy. [10] remondino, f.: heritage recording and 3d modeling with photogrammetry and 3d scanning. remote sensing, 2011, 3(6): 1104-1138. [11] de luca, l., bussayarat, c., stefani, c., véron, p., florenzano, m.: a semantic-based platform for the digital analysis of architectural heritage. computers & graphics, 2011, 35(2): 227-241. [12] patias, p.: cultural heritage documentation. 
in: fryer j, mitchell h, chandler j (eds), application of 3d measurement from images, 2007, 59(3): 225-257, whittles, dunbeath. [13] vosselman, g., maas. h-g. (eds): airborne and terrestrial laser scanning. 2010, crc, boca raton, 318 pp. isbn: 978-1904445-87-6. [14] remondino f., el-hakim, s.: image-based 3d modelling: a review. the photogrammetric record, 2006, 21(115): 269-291. [15] yin, x., wonka, p., razdan, a.: generating 3d building models from architectural drawings. ieee computer graphics and applications, 2009, 29(1): 20-30. [16] gruen, a., huang, t.s. (eds): calibration and orientation of cameras in computer vision. springer, 2001, 239 pages, issn 0720-678x. [17] sturm, p., ramalingam, s., tardif, j.-p., gasparini, s., barreto, j.: camera models and fundamental concepts used in geometric computer vision. foundations and trends in computer graphics and vision, 2011, 6(1-2): 1-183. [18] beraldin, j.a., picard, m., valzano, v., bandiera, a., negro, f.: best practices for the 3d documentation of the grotta dei cervi of porto badisco, italy. proc. is&t/spie electronic imaging, 2011, vol. 7864, pp. 78640j-78640j-15. [19] barazzetti, l., remondino, f., scaioni, m. automated and accurate orientation of complex image sequences. int. archives of photogrammetry, remote sensing and spatial information sciences, 2011, 38(5/w16), on cd-rom. isprs int. workshop 3d-arch 2011, trento, italy. [20] del pizzo, s., troisi, s. automatic orientation of image sequences in cultural heritage. int. archives of photogrammetry, remote sensing and spatial information sciences, 2011, 38(5/w16), on cd-rom. isprs int. workshop 3d-arch 2011, trento, italy. [21] roncella, r., re, c., forlani, g. performance evaluation of a structure and motion strategy in architecture and cultural heritage. int. archives of photogrammetry, remote sensing and spatial information sciences, 2011, 38(5/w16), on cd-rom. isprs int. workshop 3d-arch 2011, trento, italy. [22] hirschmuller, h.: stereo processing by semi-global matching and mutual information. ieee transactions on pattern analysis and machine intelligence, 2008, 30(2): 328–341. [23] remondino, f., el-hakim, s., gruen, a., zhang, l. turning images into 3d models development and performance analysis of image matching for detailed surface reconstruction of heritage objects. ieee signal processing magazine, 2008, 25(4): 55-65 [24] vu, h.h., keriven, r., labatut, p., pons, j.-p. towards high-resolution large-scale multi-view stereo. proc. computer vision & pattern recognition, 2009, kyoto, japan [25] furukawa, y., ponce, j. accurate, dense and robust multiview stereopsis. ieee transactions on pattern analysis and machine intelligence, 2010, 32(8): 1362-1376. ___________________________________________________________________________________________________________ geoinformatics ctu fce 299 [26] remondino, f., fraser, c.: digital camera calibration methods: considerations and comparisons. int. archives of photogrammetry, remote sensing and spatial information sciences, 2006, 36(5): 266-272. isprs commission v symposium, dresden, germany [27] pierrot-deseilligny, m., clery, i.: apero, an open source bundle adjustment software for automatic calibration and orientation of set of images. int. archives of photogrammetry, remote sensing and spatial information sciences, 2011, 38(5/w16), on cd-rom. isprs int. workshop 3d-arch 2011, trento, italy, 2-4 march 2011. 
[28] pierrot-deseilligny m., paparoditis n.: a multiresolution and optimization-based image matching approach: an application to surface reconstruction from spot5-hrs stereo imagery. int. archives of photogrammetry, remote sensing and spatial information sciences, 2006, 36(1/w41). isprs workshop on topographic mapping from space, ankara, turkey. [29] bussarayat c., de luca l., véron p., florenzano m. a real-time 3d interactive interface for a image spatial retrieval. proc. of the iadis international conference on computer graphics and visualization. [30] tapenade project (tools and acquisition protocols for enhancing the artifact documentation): http://www.tapenade.gamsau.archi.fr http://www.tapenade.gamsau.archi.fr/ ___________________________________________________________________________________________________________ geoinformatics ctu fce 275 metric accuracy evaluation of dense matching algorithms in archeological applications c. re1, s. robson2, r. roncella3, m. hess2 1 cisas, university of padova, 35129 padova (pd), italy 2 department for civil, environmental and geomatic engineering, university college london, wc1e 6bt london, united kingdom 3 dicatea, university of parma, 43124 parma (pr), italy keywords: cultural heritage, photogrammetry, laser scanning, scanner, comparison, accuracy abstract: in the cultural heritage field the recording and documentation of small and medium size objects with very detailed digital surface models (dsm) is readily possible by through the use of high resolution and high precision triangulation laser scanners. 3d surface recording of archaeological objects can be easily achieved in museums; however, this type of record can be quite expensive. in many cases photogrammetry can provide a viable alternative for the generation of dsms. the photogrammetric procedure has some benefits with respect to laser survey. the research described in this paper sets out to verify the reconstruction accuracy of dsms of some archaeological artifacts obtained by photogrammetric survey. the experimentation has been carried out on some objects preserved in the petrie museum of egyptian archaeology at university college london (ucl). dsms produced by two photogrammetric software packages are compared with the digital 3d model obtained by a state of the art triangulation color laser scanner. intercomparison between the generated dsm has allowed an evaluation of metric accuracy of the photogrammetric approach applied to archaeological documentation and of precision performances of the two software packages. 1. introduction background the conservation of cultural heritage by the reconstruction of three-dimensional models continues to be an area for constant development and evolution by a wide variety of contributing scientific communities. driving forces behind this great interest include: documentation in case of destruction or damage; creation of virtual museums and tourism; teaching and learning; conservation and restoration. whilst ultimately fields will merge, currently the most common 3d surface recording techniques can be divided into two main categories: photogrammetry and laser scanning.whilst both techniques have their advantages, critical evaluation is often biased by the capabilities of the deployed systems and the individuals using them. in particular the use of image based techniques requires rigorous photogrammetric bundle adjustment, supported by metric survey if both precision and accuracy are to be maintained [1]. 
two key issues need to be considered when applying either technique. the first is optimizing capture to minimize occlusions, which give rise to holes in the data. in the case of a triangulation laser scanner both sensor and laser must “see” the same surface; hence the base separation and spot separation will dictate the variation in surface form that can be captured. in the photogrammetric case occlusions between images will similarly give rise to "holes" in the data, and variable image quality will contribute to geometrical error and "noise" in the reconstructed surfaces. optimization of the photogrammetric process to ensure good results commences with care in the preliminary stages (camera calibration and image orientation), combined with object selection to avoid failure cases in objects where surface detail and contrasting texture are absent. secondly, surface finish is a critical issue for all optical recording techniques. in both laser scanning and photogrammetry light must be reflected back from the surface to be recorded into a camera. in the case of scanning the geometry and intensity of the light are defined by the scanning configuration and, in the better systems, by feedback based on the amount of light received at the detector. the photogrammetric technique will more typically deploy a photographic lighting setup, or even ambient light, in the recording process. in either case specular reflections, particularly from metallic or dark shiny stone finishes, will cause practical challenges. the study described in this paper was carried out as part of activities for the creation of 3d models of archaeological finds of small and medium size, in order to investigate the performance of different photogrammetric software in comparison to a state of the art laser scanning system. results have been drawn from 3d surveys of a number of archaeological finds preserved in the ucl petrie museum of egyptian archaeology. the objects examined were: a lid of a stone canopic jar (ca. 20 cm x 20 cm), a funerary cone (ca. 14 cm x 14 cm) and a cartonnage mask (45 cm x 30 cm x 9 cm). the first two objects were imaged with both laser scanning systems and photogrammetry, whilst the cartonnage was too fragile to manipulate and was imaged in-situ with photogrammetry. a photogrammetric inter-software comparison was made between dsms (digital surface models) computed with two commercial systems and one research orientated system. the commercial systems included bae socetset, which is optimized to produce mapping products, whilst the academic system was dense matcher, developed by parma university. for the two smaller objects, the resulting dsms were compared to the dense point cloud generated by the high precision (ca. 20 µm depth uncertainty) arius3d laser scanner (housed at ucl) to evaluate their metric accuracy.
2. object surface recording
2.1 3d colour laser scanning
the funerary cone was 3d colour laser scanned using a recently upgraded arius3d "foundation model 150" colour scanner, which is unique in europe, to create detailed object "fingerprints" of a range of artefact types.
the scanner, held in partnership between arius3d and ucl, is able to deliver 3d coloured point data at a sampling interval of 0.1mm (~250 dots per inch) with a range accuracy of better than 0.020mm [2]. the scan head delivers xyz, rgb and surface normal data along a 50mm profile which is driven across the object surface by the controlled motion of a coordinate measurement machine (cmm). the cmm scanning volume allows objects of up to 90cm x 50cm to be scanned. the scanner collects 3d geometry information through the use of a laser triangulation system, whilst colour is collected by analysis of the reflected light from red, green and blue lasers at 638 nm, 532 nm, and 473 nm. these capabilities give the project the ability to produce state of the art 3d surface models which have a level of geometric and colour standardization well suited to museum recording. laser scanning, as with all optical techniques, is dependent on the interaction of the surface to be recorded with light. the highly directional nature of the laser illumination and the narrow acceptance angle of the detector in the triangulation geometry favour surfaces which reflect incident light in a diffuse or lambertian way. specular surfaces, particularly polished metals, present real difficulties that either saturate the sensor or result in the sensor receiving insufficient light from the surface to make a record. such optical properties are not only dependent on the surface and imaging geometry but also on the wavelength of the light used to make the recording. state of the art systems such as those from arius3d are highly capable of recording the geometry of many different surfaces, but require a combination of artistry and science for the successful reconstruction of colour captured from disparate views. most scanning systems have a workflow that is designed to convert point cloud data into triangulation based models for subsequent visualisation and dissemination. however, work on the ucl led e-curator project has demonstrated that heritage professional / digital object interaction can be efficiently delivered from coloured point cloud data where geometry, colour and point based surface normals are combined with splat based point rendering [3]. in our case this is achieved through the use of the arius3d pointstream software [4], but there is a growing number of point rendering alternatives.
3. photogrammetric survey
photogrammetric data capture follows a common methodology and is distinct in philosophy from most computer vision approaches in that the captured imagery and content are designed from a metric standpoint. first a geometry or network design is performed to ensure that the number and location of the images to be used are appropriate to produce accurate results. several issues influence the quality of the final result. key are: the availability of known control points and/or scale in the field of view of the imaging systems, and the choice of image resolution and image scale to ensure that fine surface detail can be recorded for subsequent image matching and visualisation. if well designed, it is the ability to record fine detail which can allow photogrammetry to easily surpass the data that can be captured with the majority of laser scanning systems.
this is because laser scanning systems are limited in spatial resolution by both their projected laser dot diameter, which is typically of the order of 50 to 250 microns, and the capability of the motion system directing the scanner to capture in a spatially regular manner.
3.1 orientation software description
to orient the image blocks and optionally carry out camera calibration, many different software tools are available from both commercial and research sources. for the work carried out in this paper the following tools were chosen based on their availability in padova, parma and london. photomodeler is a digital close-range photogrammetry program that allows 3d models to be constructed from digital images. the system is used for camera calibration and for the digital orientation and restitution of photographs during processing; photomodeler (pm) computes the orientation of each image, calculating the location and angle of the camera for each photo. pm was used principally because it is a well established solution; however, it should be noted that it also allows metric constraints in the form of inter-distances between features to be included in the calculation of the bundle adjustment. in our recording process circular targets were located around the object to be measured and their inter-distances measured with a digital calliper to accurately scale the resultant model and strengthen the photogrammetric adjustment. vision measurement system (vms) has been established for some 15 years as a tool for engineering photogrammetry, having been compared against many other metrology systems in both industrial and biological applications [5]. the software supports photogrammetric simulation, self calibrating bundle adjustment with both basic and extended parameter sets, fully automatic target image measurement and the ability to produce geometrically corrected images so that the subsequent matching process does not need to consider the geometric nuances of the best fitting camera calibration parameter sets. the software also supports the use of inter-distances and directions between targets and features as part of a rigorous bundle adjustment process. eyedea has been developed in the last year at dicatea. unlike the previous software, eyedea is capable of automatically orienting a generic close-range image sequence using structure from motion (sfm) algorithms. also, a graphical user interface (gui) allows the user to measure image points manually or semi-automatically (i.e. the user selects points on one image and the software automatically finds, through a matching procedure, the homologous points on the other images of the block); the user can perform a bundle adjustment of the whole image block or process just a part; the user can also define which images make a sequence and then process them using our sfm code [6] that implements the surf operator and surf feature descriptors.
3.2 matching software description
to generate the dsms using matching algorithms, both commercial and in-house matching systems were applied: dense matcher is a software package developed by parma university based on the classical area-based stereo algorithm. the program is able to detect homologies between a reference (master) image and its corresponding (slave) images. the detection is performed using different matching windows for the two images.
the best results are obtained with the least squares matching (lsm) method. in the case of multiple images, however, the multi-photo geometrically constrained matching (mgcm) method introduced by [7] exploits the redundancy of the information present in more than two images. socetset by bae systems is a digital mapping software application. the software works with digital airborne, satellite and terrestrial imagery data and includes multi-sensor triangulation, several matching algorithms and the capability of generating a range of image based products [8]. for this research the software has been applied to generate a surface model from the photogrammetric images using its next-generation automatic terrain extraction (ngate) module. ngate is an advanced tool for automatic dsm generation utilizing combined area and edge matching. the software is capable of matching each pixel in both forward and back matching processes and can deploy break lines to delineate major surface discontinuities.
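for readers unfamiliar with lsm, the following is a standard textbook formulation of the adjustment it performs (our own summary, not a description of the dense matcher internals): for a template patch g_1 in the master image and a patch g_2 in the slave image, an affine geometric transformation and a linear radiometric transformation are estimated by minimizing the sum of squared grey value differences,

\[
\min_{a_i,\,b_i,\,r_0,\,r_1}\; \sum_{x,y} \Big[ g_1(x,y) - \big( r_0 + r_1\, g_2(a_0 + a_1 x + a_2 y,\; b_0 + b_1 x + b_2 y) \big) \Big]^2 ,
\]

where the estimated shifts a_0, b_0 give the sub-pixel position of the homologous point. in broad terms, mgcm extends this scheme by adding the collinearity equations of all participating images as geometric constraints, so that the matched positions are forced to intersect in a single object point.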
3.3 the three case studies
all objects recorded in the following three case studies are from the ucl petrie museum of egyptian archaeology [9]. all object handling during the imaging campaigns was both approved and subsequently carried out by approved petrie museum specialists.
3.3.1 canopic jar lid
the canopic jar lid (accession number uc30116) is an ancient egyptian object dating back to the new kingdom period (ca 1200 bc). this small stone object (figure 1) is ca. 20cm x 20cm x 20cm. for photogrammetric recording a nikon d700 (4357x2899 pixel) digital camera with a calibrated 38 mm lens was used. the sequence of 23 images with convergent camera attitudes follows a spiral path moving on an imaginary spherical surface centred on the object. the camera calibration operations were carried out by adopting the standard procedures provided within the photomodeler 5 software. the image orientations were performed with the photomodeler bundle adjustment by importing homologous points determined with eyedea. the scale of the photogrammetric model was defined by calliper measurements of the distance between the targets applied to a base board on which the object was placed. the measurements used for scaling were applied as constraints during the bundle adjustment. at the end of the block orientation process the reconstruction of the dsms was carried out (figure 2). orientation data were imported into dense matcher together with the estimated ground points. to optimize the correlation process, image pairs meeting requirements on the baseline-to-distance ratio and on the convergence angles of the optical axes were selected as input to the matching process. in particular, features that appear only on very narrow base photographs have much lower accuracy than features on photographs with greater separation.
3.3.2 funerary cone
the funerary cone (accession number uc37585, figure 3) is a small egyptian object dating back to the new kingdom (ca 1200 bc). it is approximately 10 cm in diameter, nearly circular, is moulded from clay and shows a relatively uniform texture. the survey of this object conveniently utilised a systematic set of images acquired under a 1030mm hemispherical dome in use at ucl to study ptm (polynomial texture mapping) [10]. the dome consists of a central camera mounting and 64 individual flash lights arranged in five tiers inside the dome. placing the object in this hemisphere allows sequential image capture with highly controlled angular illumination. in this case images were taken simultaneously deploying the lights on each of the five tiers. a single nikon d200 digital camera was mounted at the dome “north pole” above the object, which was placed on the horizontal baseboard (figure 5). to facilitate stereo imaging, the object was translated systematically from left to right to create baselines from 4 cm up to a maximum of 12 cm. the object had been placed on a base board with coded targets. the scale of the photogrammetric model has been defined by calliper measurements made between the targets on the base board, used as constraints within the bundle adjustment. for this metric comparison test, the image pair chosen subtended the maximum baseline (12 cm) illuminated by the highest tier of lights.
figure 1: archival photograph of the canopic lid
figure 2: example dsm
figure 3: archival photograph of the funerary cone
figure 4: dsm of funerary cone
figure 5: schematic drawing of the ptm dome. image courtesy of l. macdonald
3.3.3 cartonnage mask
figure 6: archival photograph of the cartonnage mask
figure 7: detail of the resulting dsm from socetset, colour per vertex mesh, of the cartonnage head
the third case study records a mask (accession number uc45849, figure 6) from the ptolemaic period (305-50 bc). the medium sized object (45 cm x 30 cm x 9cm) is made from painted plastered waste-paper, papyrus cartonnage. such head covers were used to lie upon the chest of mummified and wrapped bodies. in this case it was not possible to handle or lift the fragile object, so only a photogrammetric dsm was obtained. despite being larger, the cartonnage was imaged using the same workflow as the smaller objects, the only difference being that a larger calibrated board was required. in this case a calibrated nikon d700 with a calibrated 35mm lens was used to produce images from a systematic range of viewpoints under stable photographic lighting conditions with two indirect slave flashes. images were orientated in vms and subsequently matched with socetset. the final model was output in both point cloud and tin (triangulated irregular network) formats (figure 7). the 3d colour model is currently part of an interactive museum exhibit at the british library “growing knowledge” exhibit [11].
4. analysis and comparison
4.1 technical and practical considerations
whilst both laser scanning and photogrammetric solutions are capable of non contact production of 3d colour models, which can be used for both scientific analysis and audience visualisation, the methods used to generate the models give rise to several key differences. digital close range photogrammetry is a robust and established non-contact optical method for the documentation of museum artefacts. the equipment, consisting of a digital slr camera (nikon d700) and lighting equipment, is easily transportable to museums and fragile objects. it is capable of delivering high-resolution colour images ideal for the documentation of current condition and damage on the surface of the artefact, enabling the visualisation of details of the order of 50 µm.
the use of an imaging dome can enhance imaging consistency and provide illumination control capable of supporting a range of rti reconstruction techniques [12]; however, the dimensions of the dome or illumination device and the requirement for blackout are restrictive when compared to a simple photographic imaging configuration. differences between the on-site time necessary to capture data and off-line processing are significant, with the laser scanner taking longer than the imaging techniques. set against this is the immediacy of the 3d model and the ability to check its completeness and quality in-situ. such checking avoids the need to re-visit, but we note the continuing improvement in photogrammetric automation that will steadily erode this advantage. visual inspection is a very important aspect for both museum professionals and audience engagement. a key advantage of image based techniques is the higher resolution and colour fidelity, resulting in output that is more convincing to conservators and curators even though the underlying geometry might not be as detailed as a 3d scan sampled from laser scanning techniques. when scientifically captured, such images enable detailed inspection of damage and condition and have the necessary resolution to mimic the use of a low magnification hand lens. within these tests, the cartonnage (section 3.3.3) provided an example of an extremely fragile object that could not be removed from its museum environment or even its supporting structure. in this case the imaging system needed to go to the object and be deployed in a manner that considered other users of the museum space.
4.2 dsm comparisons
to make the comparison between datasets (photogrammetry / laser and photogrammetry / photogrammetry) 3d modelling software was used. after the registration between the surfaces to be compared, the most significant statistical values of the distances between the two surfaces were calculated: mean, standard deviation and rms (root mean square) error (table 1). to make the comparisons more readable, a colour map was also displayed.
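for reference, these statistics can be read as follows (a standard formulation of surface-to-surface discrepancies, added here for clarity and not taken from the original text): if d_i denotes the signed distance of the i-th vertex of one surface from the other, then

\[
\bar d = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad
\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\big(d_i - \bar d\big)^2}, \qquad
\mathrm{rms} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}} ,
\]

so a non-zero mean indicates a systematic offset between the surfaces, while the standard deviation describes the spread of the residual discrepancies around that offset.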
4.2.1 canopic jar lid
table 1: comparison table for the canopic jar lid
mean (mm)   stddev (mm)   rms error (mm)
0.059       0.209         0.217
figure 8: dsm deviation map for the canopic jar lid
the 8 pairs of images with the best geometric configuration and producing the most complete dsm were used: the dsms were subsequently merged to produce a global triangulated mesh that was later re-scaled according to the reference length measured on the original object. the dsm obtained from the fusion of the different meshes has a certain degree of noise (~0.2mm) and some incomplete areas due to occlusions, failure of the matching algorithm caused by low image texture on parts of the lid, and oblique viewing angles to the object surface. most obvious is that the connection between the face and the neck of the canopic jar lid is unsatisfactory. the output dsm was compared with the arius3d generated dsm as a reference. since the two models were not generated in the same reference system, an icp alignment was required. figure 8 shows the final dsm, which has then been overlaid with a colour error map denoting the discrepancy from the laser scan reference data. taking the scan model as being correct, accuracy has been determined by projecting the mesh of the complete photogrammetric model onto the scanned model. both the standard deviation and the rms error are of the order of 0.2 mm.
4.2.2 funerary cone
the sequence in figure 9 below shows three different types of comparisons: between the photogrammetric dsm generated by socetset and the reference scanning model (figure 9.a), the dense matcher dsm compared to the socetset model (figure 9.b), and the dense matcher model compared with the 3d laser scanner dsm (figure 9.c). each comparison is overlaid with the colour error map to show discrepancies between the dsms obtained by the different techniques. in this case study all models were first aligned using icp algorithms. as can be seen, the discrepancies between the two photogrammetric models appear random, with deviations of the order of 0.1mm which are attributable to differences between the matching algorithms (figure 9b). however, comparison between both photogrammetric models and the laser reference shows clear systematic departures. given the parallel axes of the stereo images used to make the models, it is possible that uncorrected lens distortion has given rise to these differences, which represent an underlying curvature in both image based models [13]. this result highlights the need to produce check data to ensure the correctness of 3d reconstruction, since such an observed trend could be interpreted as a structural change in the object by a museum professional.
table 2: standard deviation of discrepancies after icp (mm)
dsm-ss/dsm-laser   dsm-dm/dsm-ss   dsm-dm/dsm-laser
0.154              0.080           0.158
figure 9: deviation maps for the funerary cone: (a) socetset to laserscan, (b) densematcher to socetset, (c) densematcher to laserscan.
comparison across a small area in the centre of the dsms, following local alignment, effectively removes the influence of the model curvature and highlights the degree of local discrepancy, which is in all cases of the order of 0.08mm (table 3).
table 3: values of standard deviation (mm) for the comparison of a small area of the dsms in the centre of the funerary cone
dsm-ss/dsm-laser   dsm-dm/dsm-ss   dsm-dm/dsm-laser
0.078              0.072           0.081
4.2.3 cartonnage mask
in this case study the dsms generated from single image pairs with socetset have been compared with the corresponding dsms produced by dense matcher. figure 10 shows the discrepancies between the two photogrammetric techniques working on the same pair of images over the relatively flat lower portion of the mask under the same survey conditions. both image matching software packages have produced encouraging results, demonstrating agreement of the order of 0.1 mm. figure 11 is of a second pair of images which include the more three-dimensional upper portion of the mask. here agreement is lower (~0.25mm) and clearly shows significant systematic deformations. these differences are attributable to the greater complexity of the surface and to the presence of occlusion and shadows around the nose and chin that make the image matching process challenging.
figure 10: dsm deviation map for the cartonnage mask, lower part of the mask
figure 11: dsm deviation map for the cartonnage mask, right side of the head
5. conclusion
comparison between photogrammetric dsms and those made with the 3d laser scanner demonstrates an overall agreement of the order of 0.2mm.
if systematic error in the photogrammetric data can be minimised, for example through the use of convergent axes, then internal precision estimates of the order of 0.02mm should be achievable, even if this value is quite optimistic and the real value may be around 0.05 mm. such data are entirely appropriate for the documentation of small and medium-sized archaeological finds. most notable, however, is the ability of the digital imaging techniques to directly deliver compelling high resolution imagery at low cost which is readily accepted by museum professionals. results highlight the importance of the dominant role played by photogrammetric orientation in the workflow if accurate dsms are to be produced. in particular, comparison against the arius3d dsm showed that, if control points are not properly designed and acquired, significant systematic but undetectable deformations can occur.
6. acknowledgements
the authors would like to acknowledge the support of the ucl petrie museum and prof. lindsay macdonald for support in imaging with the ptm dome.
7. references
[1] remondino, f. et al., 2008. turning images into 3-d models. developments and performance analysis of image matching for detailed surface reconstruction of heritage objects. signal processing magazine, ieee, 25(4), pp. 55-65.
[2] arius3d, 2011. arius3d. available at: http://www.arius3d.com/ [accessed june 3, 2011].
[3] robson, s. et al., 2008. traceable storage and transmission of 3d colour scan data sets. in m. ioannides & a. addison, eds. proceedings of the 14th international conference on virtual systems and multimedia. cipa, dedicated to digital heritage. limassol, cyprus: archaeolingua / budapest, pp. 93-99.
[4] arius3d, 2011. pointstream software. http://www.pointstream.net/ [accessed june 3, 2011].
[5] gruen, a. & baltsavias, e., 1988. geometrically constrained multiphoto matching. photogrammetric engineering and remote sensing, 54(5), p. 633.
[6] bae systems, 2011. bae systems digital mapping software. available at: http://www.socetgxp.com/content/products/socet-set.
[7] ucl museums & collections, 2011. the petrie museum of egyptian archaeology. available at: http://www.ucl.ac.uk/museums/petrie [accessed june 3, 2011].
[8] roncella, r., re, c. & forlani, g., 2011. comparison of two structure and motion strategies. in proc. 4th isprs international workshop 3d-arch: 3d virtual reconstruction and visualization of complex architectures. trento, italy: isprs. available at: http://www.isprs.org/proceedings/xxxviii/5-w16/pdf/roncella_re_forlani.pdf.
[9] shortis, m. & robson, s., 2001. vision measurement system vms. available at: http://www.geomsoft.com/vms/ [accessed june 3, 2011].
[10] macdonald, l. & robson, s., 2010. polynomial texture mapping and 3d representation. in international archives of photogrammetry, remote sensing and spatial information sciences. isprs commission v symposium. newcastle upon tyne, united kingdom. available at: http://www.isprs-newcastle2010.org/papers/159.pdf.
[11] http://pressandpolicy.bl.uk/press-releases/growing-knowledge---exhibition-enters-a-second-phase-4aa.aspx.
[12] malzbender, t., gelb, d., wolters, h. & zuckerman, b., 2010.
“enhancement of shape perception by surface reflectance transformation”, hp laboratories technical report, hpl-2000-38, march 2000.
[13] wackrow, r. & chandler, j. h., 2011. minimising systematic error surfaces in digital elevation models using oblique convergent imagery. the photogrammetric record, 26(133), pp. 16-31, march 2011.
aerial laser scanning in archeology
martina faltynova*, karel pavelka*
*department of mapping and cartography, czech technical university in prague, faculty of civil engineering, thákurova 7/2077, 166 29 prague 6, czech republic
martina.faltynova@fsv.cvut.cz
pavelka@fsv.cvut.cz
keywords: airborne laser scanning, dtm, shaded surface
abstract: the technology of aerial laser scanning is often used for dtm generation. the dtm (digital terrain model), displayed in an appropriate form, e.g. as a shaded surface, can be used as a data source for searching for archaeological sites. aerial laser scanning data acquisition is unfortunately too expensive for non-commercial projects. a solution can be to use als data acquired primarily for other purposes by a public service. such data generally has a lower density than expensive custom-made data, but can be borrowed for research purposes to a limited extent. we tested data from the czech office for surveying, mapping and cadastre. the aim was to find out whether it is possible to use data characterized by a density of about 1 point/m2 for archaeological research. we used the dtm in the form of a shaded surface and inspected the data around a few well known archaeological sites from different periods. it is also possible to use different outputs derived from the original dtm to better display terrain discontinuities, which could be caused by human activity.
1. introduction
als data seems to be an appropriate tool for the documentation or detection of archaeological sites on a larger scale; unfortunately, the data is generally too expensive to be commonly used for these purposes. in our research we try to use als data acquired by a public service, i.e. data of lower density (about 1 point/m2), which is not as expensive as custom-made data and can even be borrowed for free for student projects such as diploma theses. the data was lent by the czech office for surveying, mapping and cadastre.
2. data
the czech office for surveying, mapping and cadastre started a project for terrain mapping using the als method in 2008. the aim of the mapping is to get an authentic and detailed dtm of the czech republic. the previous dtm, in the form of a 10x10m grid, is based on digitizing the contours of zm 10 (base map 1:10 000); these contours were obtained by topographic mapping and photogrammetry, and their height precision is about 2-5m in forested areas, which is absolutely insufficient for archaeological research. about one third of the area is currently covered by the als-based dtm, and the mapping should be complete by 2015. the standard deviation in altitude of the model points is up to 30cm. we have used the dtm displayed as a shaded surface. the data was prepared in the scop++ software; the shading parameters are: azimuth 315°, height above terrain 45°, pixel size 1x1m. terrain break lines are highlighted by the shaded surface method, which makes it suitable for archaeological research. remains of buildings and other terrain modifications are characterized by terrain break lines, local tops or pits, which do not fit the local geomorphology.
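as an illustration of what these shading parameters mean, a commonly used lambertian hillshading model (a generic formulation, not necessarily the exact algorithm implemented in scop++) computes the brightness of each grid cell as

\[
\text{shade} = \cos\theta_z \cos s + \sin\theta_z \sin s \,\cos\!\left(\varphi_{\text{sun}} - \varphi_{\text{aspect}}\right),
\]

where the solar zenith angle is the complement of the illumination height above the terrain (here 90° - 45° = 45°), s and the aspect are the slope and orientation of the cell derived from the dtm, and the illumination azimuth is 315°, i.e. light from the north-west. negative values are clipped to zero; the north-west illumination is the usual convention, so that relief appears lit from the upper left and break lines cast legible shadows.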
3. mapping of the known sites
we have chosen a few well known archaeological sites to compare the results from als data with the results of some earlier mapping methods used by archaeologists. archaeological research falls into the non-commercial sphere, which means that simple, cheap methods like stepping were used for mapping. these methods were not very precise, especially when carried out by non-professionals. as an example, compare the visualisations of the remains of the stronghold mrdice near pardubice; the fort was first mentioned in the will of heuman of mrdice in the early 14th century and was probably abandoned during the 15th century. there is an obvious difference between the scheme from 1989 based on stepping and the situation displayed in the shaded surface.
figure 1: mrdice shaded surface and orthophoto
figure 2: mrdice scheme
near the village of provodín there is an about 1.5km long rampart along the hill dlouhý vrch, which can be clearly seen in the shaded surface. the rampart dates from the war between prussia and austria in the mid 18th century. in fact the rampart was first used later, in the early 19th century during the french invasion. the rampart is up to 4m high and the hill is currently covered by beech forest.
figure 3: provodin shaded surface
figure 4: provodin rampart
4. searching for undocumented sites
the als data can also be used for more than mapping known sites more precisely. it can serve for searching for unknown historic sites: remains of forts, barrows, etc. using a shaded surface with a resolution of 1m, we are able to discern objects with a size from about 10m. it is hard to differentiate small objects (e.g. barrows) from data noise (see fig. 5, probably robbed barrows in a forest near the village kojakovice, south bohemia). the chance of finding such an object strongly depends on the season in which the data was acquired and on the vegetation cover. it is almost impossible to perfectly classify out the returns from dense deciduous forest in summer.
figure 5: kojakovice
unfortunately, forests are almost the last places where it is possible to find sites which have not been destroyed by agricultural activity. an example of this are the remains of a fort in hvezdov (ceska lipa). there are evident remains of walls along the access way and two circumvallations with a moat in between (see fig. 8). a great part of this site is covered by young pine forest, which complicated the field survey. we have found no information about the fort, but the remains are drawn in the maps of the 2nd military survey (1836-1852).
figure 6: hvezdov
figure 7: hvezdov, 2nd military survey [5]
figure 8: hvezdov, view of the circumvallation
there are two more examples of objects that can be found in als data with a resolution of 1m. the first are the remains of a circumvallation on the hill certovina near mnichovo hradiste, and the second is a barrow in a forest above the city of mlada boleslav.
5. conclusion
the als can be an excellent tool for archaeologists and other persons interested in history.
in spite of the lower resolution compared to the possibilities of commercial custom-made data, the als data produced by the public survey has a huge advantage because of its price. after the als mapping project is finished, the data will be available for the whole of the czech republic. the resolution turns out to be sufficient for archaeological research on a large scale, and there is a clear potential for using this data for research.
figure 9: mnichovo hradiste circumvallation
figure 10: mlada boleslav barrow
6. references
[1] hrady.cz. [online], [retrieved 2011-04-15]. available from web:
[2] svoboda, l. encyklopedie ceskych tvrzi ii. díl k-r. argo 2000, p. 662, isbn-10: 80-7203-279-8.
[3] obec provodin. [online], [retrieved 2011-03-28]. available from web:
[4] sedlacek, a. hrady, zamky a tvrze kralovstvi ceskeho i.-xv. praha 1998, nakl. argo, 2. vydani, isbn 80-8579484-5.
[5] presentation of old maps covering the area of czechia, moravia and silesia. [online], [retrieved 2011-05-20]. available from web:
[6] mapy.cz [online], [retrieved 2011-05-22]. available from web: http://mapy.cz
[7] koska, b., pospíšil, j., štroner, m.: innovations in the development of laser and optic rotating scanner lors, xxiii international fig congress, munich, germany, isbn 87-90907-52-3, 2006. available on www: http://k154.fsv.cvut.cz/~koska/publikace/publikace_en.php
[8] kuemen, t., koska, b., pospíšil, j.: verification of laser scanning systems quality. xxiii-rd international fig congress shaping the change [cd-rom]. mnichov: fig, 2006, isbn 87-90907-52-3.
[9] kuemen, t., koska, b., pospíšil, j.: laser scanning for castle documentation. proceedings of the 23rd cipa symposium [cd-rom]. prague: čvut, fakulta stavební, katedra mapování a kartografie, 2011, p. 1-8. isbn 978-80-01-04885-6.
[10] koska, b., tezníček, j.: solving approximate values of outer orientation parameters for projective transformation. proceedings of workshop 2008. praha: czech technical university in prague, 2008, vol. a, p. 162-163. isbn 978-80-01-04016-4.
web service based on geotools in the atlas of fire protection
jan růžička
institute of geoinformatics, vsb – tu of ostrava
jan.ruzicka@vsb.cz
keywords: webservice, geotools, atlas, fire protection
abstract
the paper describes a simple way of systems integration based on soap web services. the systems integration described in the paper is demonstrated on a system named the atlas of fire protection. the atlas is a set of dynamically created maps published in a www browser. the technologies used for the solution are umn mapserver, arcims, php, geotools and axis. geotools and axis are used for building a platform and programming language independent component for the atlas. the paper describes the software architecture used for the atlas and the role of a web service in the integration.
introduction
it was a few years ago when my colleague asked me if there was a way to integrate an external component into the atlas that would allow basic data classification. the atlas was completely built on php and php/mapscript. we could not find any php based component for this purpose. we did find a java based component, geotools lite. there were a few ways how to integrate this component into the atlas. after some research we found the approach of a web service based on soap (simple object access protocol) to be the most flexible and independent.
our decision was probably right: after a few years, when the system was rebuilt to support arcims and other tools, the only part based on geotools stayed without any change. a month ago a turkish colleague asked me about documentation of geotools used for building web services. unfortunately, i did not have any at that time. i have decided to write this paper to fill this gap on the internet. i hope that somebody will find this paper useful.
the atlas of fire protection
the atlas development started in 2004 with the support of the general directorate of the fire rescue service of the czech republic. the atlas is a web based interactive application that enables map creation based on user specified conditions. the atlas is based on a statistical database about incidents within the scope of the fire brigade's field of action. that is why there are two basic ways of thematic map creation:
- choropleth maps.
- diagram maps.
the process of choropleth map definition starts with specification of the following conditions:
- interval of time from/to of events,
- incident type (e.g. fire-fighters injured during interventions),
- statistical method for generating class intervals (jenks, equal interval, etc.),
- number of classes,
- type of frequency (square km, population),
- colour scale for class visualization.
an example of a possible result of the choropleth map creation is shown in fig. 1.
figure 1: choropleth map, screenshot from peňáz 2009
the statistical method
the atlas was based on umn mapserver, resp. on the php/mapscript library. the authors of the atlas could not find any method for data classification available in the used library. there were two ways how to solve this problem:
- implement the algorithms,
- use an external component.
the external component was found in spring 2004. at that time it was named geotools lite 1.5.
geotools lite
the geotools lite 1.5 library has been used for the atlas. the geotools lite 1.5 library is no longer developed nowadays and developers cannot download it from any official resource. there is a geotools lite follower currently named geotools 2.
geotools 2
geotools is a set of libraries written in the java language. the main features of the geotools library are [1]:
- ogc grid coverage implementation,
- coordinate reference systems,
- symbology using the ogc sld specification,
- attribute and spatial filters using the ogc filter encoding specification,
- graphs and networks,
- java topology suite (jts),
- two renderers,
- raster and vector formats.
geotools lite 1.5
geotools lite 1.5 is a library with the basic features necessary for geodata manipulation. it consists of modules for:
- geodata reading (basic raster and vector formats),
- desktop rendering (including diagrams),
- projections,
- map algebra,
- data classification.
geotools lite 1.5 data classification
for data classification three methods are available:
- equal interval,
- natural breaks (jenks),
- quantile.
why not geotools 2?
the reason why the richer geotools 2 library was not used instead of geotools lite 1.5 is quite simple: the classification module was not included in geotools 2 at the time of the implementation.
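to give a concrete idea of what two of these classification methods compute, the following small stand-alone java sketch derives equal interval and quantile class breaks from a list of values (our own illustration; the geotools lite implementations may differ in details such as tie handling):

import java.util.Arrays;

public class BreaksExample {

    // equal interval: class width = (max - min) / classCount, breaks are the upper bounds of each class
    static double[] equalIntervalBreaks(double[] values, int classCount) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double min = sorted[0], max = sorted[sorted.length - 1];
        double width = (max - min) / classCount;
        double[] breaks = new double[classCount];
        for (int i = 0; i < classCount; i++) {
            breaks[i] = min + width * (i + 1);
        }
        return breaks;
    }

    // quantile: each class receives roughly the same number of observations
    static double[] quantileBreaks(double[] values, int classCount) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double[] breaks = new double[classCount];
        for (int i = 0; i < classCount; i++) {
            int index = (int) Math.ceil((i + 1) * sorted.length / (double) classCount) - 1;
            breaks[i] = sorted[Math.min(sorted.length - 1, index)];
        }
        return breaks;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 2.5, 3.0, 7.2, 9.9, 12.4};
        System.out.println(Arrays.toString(equalIntervalBreaks(data, 3))); // approx. [4.8, 8.6, 12.4]
        System.out.println(Arrays.toString(quantileBreaks(data, 3)));      // [2.5, 7.2, 12.4]
    }
}

natural breaks (jenks) is more involved: it searches for class boundaries that minimize the within-class variance, which is why reusing an existing implementation was attractive for the atlas.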
way of integration
the atlas was based on the php language and the library was based on the java language. there were three possible ways how to integrate the atlas with geotools lite:
- take the source code of geotools lite 1.5 and rewrite it in php,
- natively link the java library from php using jni (java native interface),
- write a software component based on geotools lite 1.5 where the communication is based on something other than jni.
we believed that a classification module would be included in geotools 2 and that future development of the module could be expected at the time of the integration. that is why rewriting the code into php seemed the wrong way. the technology named jni and its usage in php was (and still is) experimental and we did not have any experience with it at that time. we started our research of soa (service oriented architecture) in 2003 and at the time of the atlas implementation it was a hot topic of our research (and it still is). that is why we decided to use web services as the solution for building an independent library based on geotools lite available from php.
web services
web services can be simply characterised by the following facts:
- software components.
- available via the web (based on http).
- similar to rpc (remote procedure call).
- based on request / response.
- usually based on xml (soap), but may be more flexible (rest) or simpler and hardcoded (wms).
geoweb services
geoweb services are services that deal with geodata (delivering, manipulating, analysing). there are several specifications for building geoweb services. well known are for example:
- web map service.
- web feature service.
- web coverage service.
- catalogue web service.
classification
the carto support service (css) was written as a soap based web service with four public methods:
- getequalintervalbreaks
- getnaturalbreaks
- getquantilebreaks
- getcapabilities
the first three methods are used for classification. all of them have the same arguments: a string that contains numbers (float or integer) delimited with semicolons, and an integer that holds the number of requested classes. the methods return a string that contains the class breaks delimited with semicolons. the fourth method returns xml with the service capabilities description. the description is based on the ogc wms getcapabilities response. we had to deal with problems with the soap implementation in php in the area of encoding of arrays and lists; that is why the encoding was simply based on text delimited with semicolons. the soap interface for the css is based on the axis library (apache foundation). the axis library and the css are hosted on a tomcat servlet container (apache foundation). for web service deployment the simple autodeploy mechanism offered by the axis technology is used:
- the source code of a service is copied into a directory specified by axis, with the jws suffix in the name of the file.
- when the tomcat servlet container is restarted, the axis servlet checks the content of the directory.
- if there is a change in the source code or if there is a new file, the axis servlet performs compilation of the source code and deployment.
figure 2: autodeploy mechanism (jws)
the source code
the source code of the css is simple, because it uses the geotools lite library and defines only the interface for communication.
import uk.ac.leeds.ccg.geotools.classification.EqualInterval;
import uk.ac.leeds.ccg.geotools.classification.NaturalBreaks;
import uk.ac.leeds.ccg.geotools.classification.Quantile;
import uk.ac.leeds.ccg.geotools.classification.SimpleClassifier;
import uk.ac.leeds.ccg.geotools.SimpleGeoData;
import java.util.Hashtable;
import java.util.StringTokenizer;
import java.io.*;

public class CartoSupport {

    public String getEqualIntervalBreaks(String data, int breaksCount) {
        EqualInterval ei = new EqualInterval(new SimpleGeoData(getHashtable(data)), breaksCount);
        return getBreaks(ei);
    }

    public String getNaturalBreaks(String data, int breaksCount) {
        NaturalBreaks nb = new NaturalBreaks(new SimpleGeoData(getHashtable(data)), breaksCount);
        return getBreaks(nb);
    }

    public String getQuantileBreaks(String data, int breaksCount) {
        Quantile q = new Quantile(new SimpleGeoData(getHashtable(data)), breaksCount);
        return getBreaks(q);
    }

    public String getCapabilities() {
        try {
            FileInputStream fis = new FileInputStream("cartosupportcapabilities.xml");
            int x = fis.available();
            byte b[] = new byte[x];
            fis.read(b);
            String content = new String(b);
            return content;
        } catch (IOException e) {
            return "can not read xml for capabilities from file cartosupportcapabilities.xml";
        }
    }

    // parse the semicolon delimited list of numbers into a hashtable indexed by position
    private Hashtable getHashtable(String data) {
        StringTokenizer st = new StringTokenizer(data, ";");
        Hashtable numbers = new Hashtable();
        int i = 0;
        while (st.hasMoreTokens()) {
            numbers.put(new Integer(i), new Double(st.nextToken()));
            i++;
        }
        return numbers;
    }

    // collect the upper bound of every class (bin) into a semicolon delimited string
    private String getBreaks(SimpleClassifier sc) {
        String breaks = "";
        int ct = sc.getBinCount();
        for (int i = 0; i < ct; i++) {
            double br = sc.getBin(i).getUpperExclusion();
            breaks = breaks + br + ";";
        }
        return breaks;
    }
}

testing css
tomáš peňáz made a comparison test of this classification module against the arcgis classification module. there was a difference, but not a significant one. the geotools lite classification was confirmed as suitable for the atlas.
the architecture
the architecture is shown in fig. 3. when a user selects data for visualisation and specifies a number of classes and a method for classification, the atlas requests the selected method on the css via soap. the request contains the data and the number of classes. the response contains the class breaks. the breaks are used with the php/mapscript library to create the requested map.
figure 3: the architecture in 2005 – 2007
changes in 2007
in 2007 the system was refactored to be able to use the arcims technology; this was requested by the general directorate of the fire rescue service of the czech republic. the architecture of the connection with the css stayed untouched. the breaks are used with arcims via an arcxml request to build the requested map.
figure 4: the architecture in 2007 – 2009
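for illustration, a minimal java client for such a jws-deployed service could look as follows, using the axis 1.x client api (the atlas itself calls the service from php; the endpoint url and the input values below are made up for the example):

import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class CartoSupportClient {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint of the autodeployed cartosupport.jws service
        String endpoint = "http://localhost:8080/axis/cartosupport.jws";

        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setTargetEndpointAddress(new java.net.URL(endpoint));
        call.setOperationName("getEqualIntervalBreaks");

        // semicolon delimited numbers and the requested number of classes
        String data = "1.0;2.5;3.0;7.2;9.9;12.4";
        Integer classes = new Integer(3);

        String breaks = (String) call.invoke(new Object[]{data, classes});
        System.out.println("class breaks: " + breaks); // e.g. "4.8;8.6;12.4;"
    }
}

the php side of the atlas performs the equivalent call with its own soap client and then feeds the returned, semicolon-delimited breaks into the map rendering step.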
other example
in the following chapter another example of how to use geotools for building geoweb services is described.
simple attribute search
geotools can be simply used for searching in geodata according to geometry or attributes. it is probably more efficient to use some database system (e.g. postgis) for building a geoweb service of this type, but there can be situations where it is not allowed to use a whole database system, for example when it is necessary to optimise your data flow using a service migration mechanism. in that case geotools can be very useful, because the service together with its data can be simply migrated to another server as one archive file. this example has been prepared for the purpose of this paper. the method getparcelowner searches a simple esri shapefile according to the parcel id specified as the argument of the method. the method returns the name and address of the parcel owner, or an exception text if there is no match with any id in the file. names and addresses of owners are simply stored as a text attribute of the parcel.

package cz.vsb.gisak.ruz76.gt.examples;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.FeatureSource;
import org.geotools.factory.CommonFactoryFinder;
import org.geotools.feature.FeatureCollection;
import org.geotools.feature.simple.SimpleFeatureImpl;
import org.opengis.filter.Filter;
import org.opengis.filter.FilterFactory2;

public class CadastreSearch {

    public String getParcelOwner(String id) {
        try {
            File cadFile = new File("cadaster.shp");
            Map map = new HashMap();
            map.put("url", cadFile.toURL());
            // open shapefile
            DataStore ds = DataStoreFinder.getDataStore(map);
            FeatureSource fs = ds.getFeatureSource("cadaster");
            FilterFactory2 ff2 = CommonFactoryFinder.getFilterFactory2(null);
            // prepare filter on attribute id
            Filter filter = ff2.equal(ff2.property("id"), ff2.literal(id), false);
            // run filter to obtain feature collection
            FeatureCollection col = fs.getFeatures(filter);
            Iterator iterator = col.iterator();
            if (iterator.hasNext()) {
                // read first feature from collection
                SimpleFeatureImpl parcel = (SimpleFeatureImpl) iterator.next();
                return parcel.getAttribute("owner").toString();
            } else {
                return "no match";
            }
        } catch (IOException ex) {
            return "ioexception";
        }
    }
}

conclusion
web services are quite an efficient tool to integrate different systems when you work in a synchronous environment with clearly defined interfaces. every beginner in programming is able to write their own web service based on java, axis and geotools. geotools is just one of many available libraries for building geoweb services.
references
1. geotools (geotools 2009) http://www.osgeo.org/geotools
2. peňáz et al. fire protection atlas of the czech republic. available in intranet only. (peňáz 2009).
support
the article is supported by the czech science foundation as a part of the project ga 205/07/0797.
improving completeness of geometric models from terrestrial laser scanning data
clemens nothegger
institute of photogrammetry and remote sensing, vienna university of technology
gußhausstraße 27-29, vienna, austria
cn@ipf.tuwien.ac.at
keywords: laser scanning, modeling, cultural heritage, automation, documentation, triangulation
abstract: the application of terrestrial laser scanning for the documentation of cultural heritage assets is becoming increasingly common. while the point cloud by itself is sufficient for satisfying many documentation needs, it is often desirable to use this data for applications other than documentation. for these purposes a triangulated model is usually required.
the generation of topologically correct triangulated models from terrestrial laser scans, however, still requires much interactive editing. this is especially true when reconstructing models from medium range panoramic scanners and many scan positions. because of residual errors in the instrument calibration and the limited spatial resolution due to the laser footprint, the point clouds from different scan positions never match perfectly. under these circumstances many of the software packages commonly used for generating triangulated models produce models which have topological errors such as surface intersecting triangles, holes or triangles which violate the manifold property. we present an algorithm which significantly reduces the number of topological errors in the models from such data. the algorithm is a modification of the poisson surface reconstruction algorithm. poisson surfaces are resilient to noise in the data and the algorithm always produces a closed manifold surface. our modified algorithm partitions the data into tiles and can thus be easily parallelized. furthermore, it avoids introducing topological errors in occluded areas, albeit at the cost of producing models which are no longer guaranteed to be closed. the algorithm is applied to scan data of sculptures of the unesco world heritage site schönbrunn palace and data of a petrified oyster reef in stetten, austria. the results of the method's application are discussed and compared with those of alternative methods.
1. introduction
there are many different instruments on the market for performing terrestrial laser scanning (tls). these instruments vary considerably with respect to their measuring principle, accuracy, speed, range and purpose. therefore the strategy for processing the data also needs to be adapted to the advantages and disadvantages of the respective instrument. the choice of instrument is mainly driven by the scale of the objects to be scanned. small objects measuring from a few centimeters up to a few meters can be scanned using close range scanners. these scanners have a high accuracy, but also a limited range (several meters) and a limited field of view. it is therefore necessary to build the final point cloud by merging scans from many different scan positions. this can become very expensive if it is not automated. larger objects measuring from a few meters up to hundreds of meters can only be scanned efficiently using medium to long range scanners, which have a wide field of view, typically panoramic, and a theoretical maximum range of up to a hundred meters, sometimes even more. the accuracy of these scanners, on the other hand, is lower when compared to close range scanners, despite some improvements in recent years. the challenge when working with data from these instruments, compared to close range scanners, thus shifts from registering a multitude of individual scans to dealing with measurement errors, both random noise and systematic errors. in this paper we will exclusively focus on this latter class of instruments. while the point cloud by itself is sufficient for satisfying many documentation needs, it is often desirable to use this data for applications other than documentation. for these purposes a triangulated model is usually required. surface reconstruction from point clouds is a well studied problem and a great number of different algorithms solving it have been developed. it is also available in commercially available software packages such as rapidform, geomagic studio, or polyworks.
most of them, however, work best when applied to data acquired using close range scanners, or when the data acquired using medium range scanners is of a relatively low resolution. however, when exploiting the full potential of medium range terrestrial laser scanners, i.e. using them at high resolutions, generating topologically correct triangulated models is still a challenging task, which typically involves much interactive editing. the main reason is that because of residual errors in the instrument calibration and the limited spatial resolution due to the laser footprint, the point clouds from different scan positions never match perfectly. under these circumstances many of the commonly used algorithms for generating triangulated models produce models which have topological errors such as surface intersecting triangles, holes or triangles which violate the manifold property [1]. in this paper we present an algorithm which significantly reduces the number of topological errors in the models from such data, when compared with the results of commercial software packages. the algorithm is a modification of the poisson surface reconstruction algorithm. poisson surfaces are resilient to noise in the data and the algorithm always produces a closed manifold surface. the surface will be closed even when the sampling of the surface is incomplete, e.g. because of occlusions. while this is a desirable property in many applications, for documentation purposes this arbitrary closing of surfaces can be problematic, especially since this closing might not even be topologically correct. our modified algorithm restricts the surface to sampled areas and thus these errors can be avoided, albeit at the cost of producing models which are no longer guaranteed to be closed. our modified algorithm is tested on a dataset consisting of a number of small point clouds, along with the original poisson surface algorithm and the algorithm available in the commercial software package geomagic studio 11. using these datasets we demonstrate that our algorithm does indeed perform as designed, both in simple as well as in challenging situations. to demonstrate the applicability to large real world datasets it is applied to scan data of sculptures of the unesco world heritage site schönbrunn palace as well as data of a petrified oyster reef.
2. related work
the data from terrestrial laser scanners consists of a set of point samples of all the surfaces within the line of sight of the scanner. because of occlusions the data from many scan positions need to be merged into one point cloud. the resulting point cloud is unstructured, i.e. it does not contain any information about the topologically correct relations between the points. furthermore, the point cloud usually contains both noise and systematic errors. therefore the accurate and correct reconstruction of the originally sampled surface from tls data is a challenging problem. there are two basic approaches to surface reconstruction. one approach is to utilize geometric properties of the point cloud. examples of this approach include algorithms such as alpha shapes [2], power crust [3], or vrip [4]. algorithms which try to construct highly generalized models by fitting geometric primitives also fall into this category. the common property is that an explicit description of the surface itself is constructed.
the alternative is so-called implicit surfaces. here the goal is to find a function defined on the entire volume around the object. the surface itself is then defined as an iso-contour of this scalar field. examples of this approach include the implicit surface framework described in [5], and poisson surfaces [6],[7], which are the basis of this work. 2.1 poisson surfaces the idea behind poisson surfaces is to look at an indicator function, which is zero outside the object and one inside. since this function is not continuous, it is smoothed by convolving it with a gaussian function, or an approximation thereof. it can then be shown that the true surface normal vectors are samples of the gradient field of the convolved indicator function. since in reality the true surface normal vectors are unknown, the surface normals are estimated from the scanned point samples. the point samples contain measurement errors, thus the function that is sought is the one whose gradient field is closest to the gradient field estimated from the point cloud. the solution of this variational optimization problem is a poisson equation, hence the name poisson surfaces. the solution of the poisson equation is the convolved indicator function, which is a scalar field defined in the entire volume enclosing the object. to compute the solution this volume is discretized into voxels. the final surface can be located with sub-voxel precision; nevertheless a fine discretization is needed for good results. because of the enormous amount of memory that would otherwise be necessary this needs to be done using an octree, a data structure which ensures that the full resolution is only used close to the surface. this does not impact the accuracy of the solution, since away from the surface the indicator function is constant. the achievable spatial resolution is determined by the maximal depth of the octree, and thus depends on the point density and the smoothing function used. 2.2 surface normal vectors to estimate the surface normal vectors it is necessary to estimate the tangential plane for each point of the surface. the most common and straightforward approach is to perform a least squares fitting of a plane to a compact neighborhood of sample points. for homogeneously sampled surfaces, as is typically the case with close range data, this works well. for panoramic tls instruments, however, the point density is usually very inhomogeneous due to the wide range of measured distances. to get consistent results w.r.t. estimation error and smoothing properties it is necessary to locally adapt the size of the neighborhood [8]. in practice the data from tls contains a quite significant number of outliers. for instruments utilizing the phase-shift measurement principle the main source of outliers is the fact that the laser beam can simultaneously illuminate more than one surface. in this case the measured distance is an average of the distances to the illuminated surfaces and thus does not correspond with any existing surface. this always happens at the silhouette of objects. other sources of outliers are glossy surfaces, which under certain angles can either saturate the detector due to their high reflectivity, or lead to multi-path effects by reflecting the laser beam like a mirror.
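the baseline least squares plane fit can be written compactly: the normal is the eigenvector of the local covariance matrix with the smallest eigenvalue. a minimal sketch, using a fixed k-nearest-neighbour neighbourhood instead of the locally adapted neighbourhood of [8] and without any outlier handling:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, scanner_position, k=20):
    """least squares tangential plane fit for every point of a cloud.

    points: (n, 3) float array, scanner_position: (3,) array used only to
    give every normal a consistent orientation (towards the scanner),
    k: number of nearest neighbours used for the plane fit.
    """
    tree = cKDTree(points)
    _, neighbours = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, idx in enumerate(neighbours):
        q = points[idx] - points[idx].mean(axis=0)
        # eigenvector of the covariance matrix with the smallest eigenvalue
        _, vectors = np.linalg.eigh(q.T @ q)
        n = vectors[:, 0]
        if np.dot(scanner_position - points[i], n) < 0:
            n = -n                                   # orient towards scanner
        normals[i] = n
    return normals
```

a plain fit of this kind is of course corrupted by the outlier sources described above, which is what motivates the robust estimators discussed next.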
even in the absence of such effects, it is also not guaranteed that all the points within the neighborhood used for estimating the tangential plane indeed lie on the same surface. they might be beyond a sharp edge or on a parallel surface if the material is thin. these points can thus appear as outliers when they are used to estimate the tangential plane for a surface they don't belong to. to deal with these situations robust estimation techniques can be used. robust estimators take much more time to compute and are also not optimal with respect to their statistical efficiency. however, they are unaffected by outliers and thus allow outliers to be identified more reliably [9], making them a worthwhile addition to the processing toolbox. robust estimators that have successfully been used in this context include the fast minimum covariance determinant estimator [10],[11] and an estimator based on robustly adapted kernel density estimation [12]. 3. method and implementation the algorithm we present in this paper uses the poisson surface technique. however, we implemented it differently than described in [7]. the key difference between our implementation and the original implementation is that we do not solve the system globally, but rather split the system into small cubes, independently solve the systems locally and in a second step correct the errors that occur along the boundary of the cubes. the main advantage of our approach is that it is more easily parallelizable and can be implemented using data streaming. thus it is more suitable for very large datasets and scales better when run on clusters of machines. we achieve this localization by restricting the domain ω on which the indicator function χ is defined to areas close to the surface rather than the entire bounding box around the object. this is illustrated in figure 1. figure 1: illustration of the poisson surface algorithm: a) sample points, surface normal vectors, domain ω and domain boundary ∂ω, b) smoothed indicator function χ and c) iso-contour for the original implementation and d) sample points, surface normal vectors, domain ω and domain boundary ∂ω, e) smoothed indicator function χ and f) iso-contour for the modified implementation. restricting the domain in such a way has the disadvantage that the surface is no longer guaranteed to be closed and it raises numerical problems. there are two main reasons: first, the boundary can become quite complicated, and second, the dirichlet boundary condition χ = 0 on ∂ω is only true on the outside and thus only applicable to the outer boundary. on the inner boundary the neumann boundary condition grad(χ) ∙ n = 0, where n denotes the normal vector of the boundary, must be used. unfortunately, a system of partial differential equations containing a neumann boundary condition along a complicated boundary is numerically challenging to solve, at least when using finite differences. to mitigate these difficulties we used the approach described in [13] and additionally used the dilate morphological operator on the voxel grid to smooth the boundary and move it away from the surface, where these errors resulting from the discretization of the problem domain matter less. the dilate operator also enables the algorithm to close small holes which may exist in the sampling. the resulting system of partial differential equations is much too large to be solved directly. therefore iterative approximation solvers must be used.
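in summary, and written in our own notation (the conditions above are given only in words), the boundary value problem solved on the restricted domain is

$$
\Delta\chi = \nabla\cdot\vec{V} \ \text{ in } \Omega, \qquad
\chi = 0 \ \text{ on the outer boundary of } \Omega, \qquad
\nabla\chi\cdot\vec{n} = 0 \ \text{ on the inner boundary of } \Omega,
$$

where $\vec{V}$ is the smoothed vector field derived from the oriented surface normals, $\vec{n}$ is the boundary normal, and the reconstructed surface is subsequently extracted as an iso-contour of $\chi$.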
in [7] the octree used for the organization of the data is used to construct a multigrid solver. this approach profits from the fact that the solution is constant almost everywhere except near the surface. in our approach, however, the solution is undefined almost everywhere except near the surface. therefore the multigrid approach cannot be used. instead we use a domain decomposition solver based on the additive schwarz method [14]. in this method the entire domain is divided into overlapping subdomains, and the results from solving the problem on one subdomain are added to the other overlapping subdomains to compensate for the errors. this procedure is iterated until the errors are sufficiently small. here the domain restriction is advantageous: since solution components with a low frequency are not possible, the solution converges quickly even without using a coarse grid. 4. results we tested our implementation of the poisson surface algorithm on three datasets. the first is synthetic and consists of sample points on a plane and on a sphere with various levels of noise. the advantage of the synthetic data is that the ground truth is known and the performance of the algorithm can be assessed accurately. the other two datasets are from actual scan campaigns. 4.1 synthetic datasets this dataset consists of five small point clouds. they were constructed to evaluate the characteristics of the algorithm in situations which are commonly encountered in laser scanning data. the first point cloud consists of 3600 points on a plane, i.e. the data is error free. in reality data is never error free, but data from close range scanners and data from a single low-resolution panoramic scan is usually quite close to this ideal. the second and third point clouds contain the same points, but with an added gaussian range error having a standard deviation of one and three millimeters, respectively. the spacing of the points is one millimeter in each direction. this might seem like an unrealistically large error, especially with respect to the point spacing. however, while this is true for a single scan, it is not unrealistic when multiple scans are combined. with modern panoramic scanners a point spacing of one to five millimeters and a noise level of 0.3 to 0.5 mm can be achieved even when keeping scan times down to a few minutes, assuming that a maximum scanning distance of 10 meters is not exceeded. the scan data from multiple scan positions will usually not match perfectly, since residual errors in the instrument calibration cannot be corrected using the rigid body transformation used in registering the scans. these are systematic errors which can be up to several millimeters in magnitude. when combining data from two or three scan positions a combined standard deviation of one millimeter and a point spacing of one millimeter is not uncommon. when using older instruments, e.g. when dealing with data acquired a few years ago, or when using current instruments under very unfavorable conditions, e.g. on an unstable platform or with very dark surfaces, the errors are even higher. the point cloud with three millimeter standard deviation is designed to mimic this situation. finally, the fourth and fifth point clouds consist of 7200 points on a small sphere with a radius of 6 centimeters.
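the synthetic point clouds described here are straightforward to reproduce. the following sketch is our reading of that setup, for illustration only; the range error is approximated as an offset perpendicular to the plane and along the radius of the sphere, respectively, and all coordinates are in meters:

```python
import numpy as np

rng = np.random.default_rng(0)

def plane_cloud(n_side=60, spacing=0.001, sigma=0.0):
    """n_side x n_side grid (3600 points by default) on the plane z = 0
    with 1 mm spacing; gaussian error of std sigma added along z."""
    xs, ys = np.meshgrid(np.arange(n_side), np.arange(n_side))
    pts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(n_side * n_side)])
    pts *= spacing
    pts[:, 2] += rng.normal(0.0, sigma, len(pts))
    return pts

def sphere_cloud(n=7200, radius=0.06, sigma=0.0):
    """n roughly uniformly distributed points on a sphere of 6 cm radius;
    gaussian error of std sigma added along the surface normal."""
    v = rng.normal(size=(n, 3))
    normals = v / np.linalg.norm(v, axis=1, keepdims=True)
    return normals * (radius + rng.normal(0.0, sigma, n))[:, None]

clouds = {
    "perfect plane": plane_cloud(),
    "noisy plane (1 mm)": plane_cloud(sigma=0.001),
    "noisy plane (3 mm)": plane_cloud(sigma=0.003),
    "perfect sphere": sphere_cloud(),
    "noisy sphere (1 mm)": sphere_cloud(sigma=0.001),
}
```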
again, one of the point clouds is error free; the other has an added gaussian error with a standard deviation of one millimeter in the direction of the surface normal. this dataset is designed to show the behavior of the algorithm on surfaces with a fairly high curvature. table 1 shows the results of comparing the vertices of the final surface with the known ground truth for the three surface generation methods. there is a small, but nonetheless significant, deviation from the expected mean distance for both poisson surface implementations, i.e. the surface estimator is biased. this is the case even for the error free dataset. the surface constructed with geomagic studio 11 does not show this bias. since for poisson surfaces the final surface is the iso-contour of a scalar field, it needs to be examined whether this bias can be reduced by calibrating the value of the iso-contour.

                       poisson surfaces          localized poisson surfaces   geomagic studio 11
                       mean dist.   std. dev.    mean dist.   std. dev.       mean dist.   std. dev.
perfect plane          0.016        0.019        0.014        0.019           0.000        0.000
noisy plane (σ=1mm)    0.037        0.082        0.082        0.090           0.005        0.746
noisy plane (σ=3mm)    0.246        0.136        0.147        0.265           0.004        2.245
perfect sphere         0.014        0.053        0.018        0.097           0.000        0.000
noisy sphere (σ=1mm)   0.028        0.190        0.126        0.343           0.023        0.799

table 1: comparison of absolute mean and standard deviation of the distances between surface vertices and the known ground truth for the three tested algorithms

on the other hand, the noise reduction properties of the poisson surfaces are excellent. the residual noise in the surface is reduced by more than one order of magnitude when compared to the noise level of the original point cloud and there are no topological errors. our localized version of the poisson surfaces does not perform quite as well as the original implementation in this respect. this effect can also be seen in table 2, where images of the resulting surfaces are shown. table 2 also shows that for the localized version the errors are spread equally over the entire surface, whereas with the original implementation the errors are located mostly towards the edges, where the surface bends away from the plane, while the errors in the central part are smaller. the reason is that for this point cloud the assumption that the points lie on a closed surface is not valid. it still needs to be examined why the localized version of the poisson surfaces does not perform as well as the original implementation w.r.t. the noise suppression. the most likely reason, however, is that we used a smaller, simpler but also less accurate discretization of the problem space. table 2: visualization of the estimated surfaces (poisson surface, localized poisson surface, geomagic studio 11) for the noisy plane (σ=1mm), noisy plane (σ=3mm) and noisy sphere datasets. it can also be seen in table 2 that when the noise level is relatively high, the surface reconstruction built into geomagic studio 11 completely fails to produce a useful surface, despite efforts on our part to utilize the built-in noise reduction facilities. we want to stress that this is not a problem of geomagic studio in particular. the results would not look much different with other commercial software packages.
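the quantities reported in table 1 are plain point-to-surface distances against the known ground truth; a sketch of how they can be computed for the two synthetic shapes (our illustration, with results converted to millimeters):

```python
import numpy as np

def plane_stats(vertices):
    """absolute mean and standard deviation (mm) of the distances of
    mesh vertices to the ground-truth plane z = 0."""
    d = vertices[:, 2] * 1000.0          # signed distance in mm
    return np.mean(np.abs(d)), np.std(d)

def sphere_stats(vertices, radius=0.06):
    """absolute mean and standard deviation (mm) of the distances of
    mesh vertices to the ground-truth sphere of 6 cm radius."""
    d = (np.linalg.norm(vertices, axis=1) - radius) * 1000.0
    return np.mean(np.abs(d)), np.std(d)

# e.g. plane_stats(reconstructed_vertices) -> (mean distance, std. dev.)
```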
dealing with this kind of data is very difficult and the only thing that helps is to reduce the point density, thus reducing the relative noise level. that, however, inevitably causes a loss of detail. this can also be seen with the oyster reef dataset. 4.2 schönbrunn attic sculptures dataset the second dataset consists of 400 individual scans of 50 attic sculptures of the unesco world heritage site schönbrunn palace. the sculptures were removed for restoration and scanned after the restoration was complete. for each sculpture a total of 8 scans were acquired from two height levels. a faro photon 120 scanner was used at medium resolution (¼ scans), but from a distance of no more than 5 meters, resulting in an average point spacing of 2 mm per scan. before the surface reconstruction the data was preprocessed and registered. the preprocessing stages consisted of outlier removal, random noise reduction, and surface normal estimation [1]. the outlier removal was performed primarily to eliminate the erroneous points that occur along the silhouettes of the arms and legs of the sculptures. after registration of the scans the surface reconstruction was performed using both the poisson surface algorithm and geomagic studio 11. figure 2 shows the result for one of the more elaborate sculptures. the model generated using the poisson surface algorithm is clearly visually more appealing and would barely need any interactive editing if it were to be used for visualization purposes. in the surface reconstructed using geomagic studio 11 the problems in the data are still clearly visible. the roughness of the surface is the result of residual registration errors; it is especially pronounced in the face and on the outside of the arm, which are seen from multiple scans. areas which are occluded in all but one scan, such as the part of the chest behind the arm, do not exhibit this roughness. the rims which are visible along the chest and neck are the result of outliers due to the silhouette effect, which were close enough to the surface points to pass the outlier filter. the poisson surface algorithm is capable of concealing these deficiencies in the data. figure 2: schönbrunn attic sculpture. a) entire sculpture b) reconstructed surface using poisson surface algorithm c) differences between poisson surface and original points d) reconstructed surface using geomagic studio 11 and e) differences of geomagic studio surface and original point cloud. when looking at the accuracy of the models, the situation is quite different, however. the difference model comparing the reconstructed surface with the points used to reconstruct the surface shows predominantly shades of blue for the poisson surface. this means that there is a systematic bias, a trait of the algorithm that was also present in the synthetic dataset and discussed there. if the geometric model is to be used for documentation purposes other than visualization, e.g. monitoring or change detection, such a bias is not desirable. the surface reconstructed with geomagic studio, on the other hand, almost exclusively shows cyan and yellow colors. this means that the surface never deviates more than 0.5 mm from the original points. 4.3 petrified oyster reef dataset the third dataset we used for our experiments consisted of 8 scans of the world's largest accessible petrified oyster reef located in stetten, austria.
the exposed area is approximately 20 by 15 meters; the scan positions are located along the boundary of the area. the challenge with this dataset is that the scans were acquired using the low resolution setting of the instrument (leica hds 6000) and from a very shallow angle, despite the size of the area to be scanned. the result is that there are large areas which have a point density significantly lower than one point per centimeter. due to the very shallow angles there are numerous outliers due to the silhouette effect, which are very hard to detect because of the low point density and which are an impediment to accurate registration. furthermore, there is very little overlap between the individual scan positions due to their large distance and the shallow angles. the goal was to reconstruct a surface that accurately documents the shapes of the petrified structures. the results are shown in figure 3. two models were constructed, one interactively by a skilled operator using geomagic studio 11 (3c), the other using the poisson surface algorithm (3b). for reference a model was constructed using only a single scan (3a). comparing the models it can be seen that the poisson surface model preserves more detail than the interactively edited model. however, when compared to the model determined from the single scan, it can be seen that the sharpness of the edges of the original is completely lost in both models. thus neither model fulfills the goal of documenting the petrified shapes. a high point density is a necessary condition if sharp edges are to be represented accurately. figure 3: the oyster reef data. a) triangulation of a single scan, b) poisson surface reconstruction of all 8 scans and c) interactively created surface using geomagic studio 11. 5. conclusions panoramic terrestrial laser scanners are much more cost efficient than close range scanners when scanning objects which are larger than a few meters. however, when deriving surface models from this data for visualization purposes, currently much manual effort is needed to clean the surface of artifacts. in this paper we showed that by using the poisson surface algorithm it is indeed possible to significantly reduce this effort, provided that the point density of the combined point cloud is high enough. low point density leads to strong smoothing over sharp edges. the benefits are that residual errors in the registration and a few outliers can be tolerated. the main advantage of the presented methodology is that it produces a geometric model which can typically be directly used for visualization purposes without requiring further interactive editing. manual closing of holes is only necessary if the scan data has large uncovered areas. the field of smoothed surface normal vectors which is produced as part of the algorithm is ideally suited for the derivation of a high-resolution normal map for texture rendering. this is especially beneficial for real-time rendering, since for this application the polygon count of the model needs to be dramatically reduced. the determined surface, however, always exhibited a systematic deviation from the mean position of the points. future research is required to determine the cause of this behavior. as long as the reason is not known and this bias cannot be corrected, the use of this method for purposes other than visualization cannot be recommended.
however, since for documentation purposes the point cloud itself is usually sufficient, this does not prevent the method from being used for cultural heritage documentation, as long as the original point cloud is kept for reference. we used a modified implementation of the poisson surface algorithm which solves many local systems instead of a large global system. the advantage is that the localized version scales very well using parallel computation, and data streaming can be used to keep the memory footprint low. another characteristic of the modified implementation is that the resolution can be chosen consistently and independently of the point spacing. the errors in the solution for the indicator function are slightly worse for the modified version, because of a less accurate discretization. this can be offset, however, by using a smaller discretization interval. the reduced memory footprint and improved parallelization of the modified implementation make it possible to use the method for very large datasets consisting of many billions of points. 6. acknowledgements this work is funded by the österreichische forschungsförderungsgesellschaft ffg under project number 21275ř5. the data of the attic sculpture was provided by schloß schönbrunn kultur- und betriebsges.m.b.h. the data of the oyster reef was acquired by leica geosystems austria gmbh and provided by the naturhistorisches museum wien. 7. references [1] p. dorninger and c. nothegger, “automated processing of terrestrial mid-range laser scanner data for restoration documentation at millimeter scale,” proceedings of the 14th international congress “cultural heritage and new technologies,” vienna, austria: 2009. [2] h. edelsbrunner and e.p. mücke, “three-dimensional alpha shapes,” 1994. [3] n. amenta, s. choi, and r.k. kolluri, “the power crust,” proceedings of the sixth acm symposium on solid modeling and applications, ann arbor, michigan, united states: acm, 2001, pp. 249-266. [4] b. curless and m. levoy, “a volumetric method for building complex models from range images,” proceedings of the 23rd annual conference on computer graphics and interactive techniques, acm, 1996, pp. 303-312. [5] h. hoppe, “surface reconstruction from unorganized points,” phd thesis, university of washington, 1994. [6] m. bolitho, m.m. kazhdan, r.c. burns, and h. hoppe, “parallel poisson surface reconstruction,” isvc (1), g. bebis, r.d. boyle, b. parvin, d. koracin, y. kuno, j. wang, r. pajarola, p. lindstrom, a. hinkenjann, m.l. encarnação, c.t. silva, and d.s. coming, eds., springer, 2009, pp. 678-689. [7] m. kazhdan, m. bolitho, and h. hoppe, “poisson surface reconstruction,” proceedings of the fourth eurographics symposium on geometry processing, eurographics association, aire-la-ville, switzerland, 2006, pp. 61-70. [8] c. nothegger and p. dorninger, “3d filtering of high-resolution terrestrial laser scanner point clouds for cultural heritage documentation,” photogrammetrie, fernerkundung, geoinformation (pfg), vol. 1, 2009. [9] d.c. hoaglin, f. mosteller, and j.w. tukey, understanding robust and exploratory data analysis, wiley-interscience, 2000. [10] c. nothegger and p.
dorninger, “automated modeling of surface detail from point clouds of historical objects,” 21st cipa symposium, anticipating the future of the cultural past, the international archives of photogrammetry, remote sensing and spatial information sciences, athens: 2007, pp. 538-543. [11] p.j. rousseeuw and k. van driessen, “a fast algorithm for the minimum covariance determinant estimator,” technometrics, vol. 41, 1999, pp. 212-223. [12] b. li, r. schnabel, r. klein, z. cheng, g. dang, and j. shiyao, “robust normal estimation for point clouds with sharp features,” computers & graphics, vol. 34, apr. 2010, pp. 94-106. [13] b.j. noye and r.j. arnold, “accurate finite difference approximations for the neumann condition on a curved boundary,” applied mathematical modelling, vol. 14, jan. 1990, pp. 2-13. [14] a. toselli and o. widlund, domain decomposition methods, springer, 2004. trends in geoinformatics education markéta potůčková department of applied geoinformatics and cartography faculty of science, charles university in prague e-mail: mpot@natur.cuni.cz introduction recent trends in global positioning systems (gps), geographic information systems (gis), photogrammetry, remote sensing and communication technologies require changes in surveying and related educational programs dealing with geoinformation, such as geography, environmental engineering, forest engineering and geology. changes in structure and format of university curricula across europe within the past few years reflect this need as well as a diminishing number of survey engineering and geoinformatics students in some countries. multi-disciplinary education in information technologies (it), management or economics and geoinformatics can increase employment opportunities in some labour markets. the terms ‘geoinformatics’ and ‘geomatics’ are used interchangeably in some university programs. the definition of both terms has not been standardized to date. in the context of this paper, geoinformatics is the science dealing with the development and management of databases of spatial data, their analysis, modeling and presentation, and the development and integration of computer tools and software for solving these tasks. the term geomatics, which originated in canada in the 1960s, additionally includes the disciplines of spatial data acquisition (wikipedia). geoinformatics curricula at universities with a long tradition in geodesy and cartography education are usually built on solid training in mathematics, physics, programming, computer graphics, and web applications. this paper focuses on educational models not common in central europe (project-based learning at aalborg university, denmark, and internationally oriented education at itc, the netherlands). current trends in geoinformatics education although some subjects, especially cadastre and land management, have country-specific content, common general trends can be identified (enemark, 2005, höhle, 2006): • as the amount of information increases, it is necessary to concentrate on ‘core knowledge’ within each subject. students should be given references relevant to their field of interest and understand the technology and interpretation of results, while using modern, easy-to-use systems. • the traditional focus on professional and technical skills (knowing how) is changing into educational models which expose students to modern technologies and teach how to keep the knowledge up-to-date and solve problems on a scientific basis.
geinformatics fce ctu 2006 35 trends in geoinformatics education � learning management skills in order to meet the needs of customers and to adopt changes in technology developments in the context of a changing social and economic structures of the society. � adopting e-learning as part of a university education. lectures and exercises are supplemented with presentations, and interactive educational materials on the internet as well as on-line test and assessments. � offering geoinformatics courses to international students by teaching in english, as shown by some european universities (e.g. tu berlin, stuttgart university of applied sciences). this could be a possible mitigation measure to decreasing numbers of students enrolled in geoinformatics in some european countries. � in recognition of the concept of continuing professional development, or lifelong learning, universities do not concentrate on education of undergraduate and ph.d. students only, but offer refresher courses on different subjects and at different levels. the use of elearning enables enrolment of professionals as their time and possibility of attending courses are limited. forms of education in most central european countries education in land surveying and mapping has been established between the world wars. master degree programs take 4-5 years, postgraduate studies 3 years. the eu countries signed the bologna declaration in 1999. it recommends implementation of the three degree structure for university education, bachelor of science (b.sc., 3 years), master of science (m.sc., 2 years), and ph.d. (3 years). to promote student mobility, a system of academic credits (the european credit transfer system) has been recommended. traditional, course-based approach of university education is based on lectures and exercises offered each semester according to a curriculum schedule. most courses end with an examination. at some universities, two or three weeks of field work take place at the end of spring semesters. in 1970’s, a new approach, the problem based, or project based learning (pbl) was introduced and successfully implemented at the aalborg university in denmark and the university of aveiro in portugal (offers a b.sc. education in geo-information engineering). in this educational model, lectures and exercises still have an important role but take less time (about 50%). main focus is on project work. students have to find, define, and solve a problem within the course of the semester. their work is documented in a report and defended during the examination. the concept of the pbl is to establish a closer connection between research and education. teachers inform students about the latest developments during lectures and propose projects that are connected to their research topics. supervising projects outside the educator’s main research interest forces the teacher to update his knowledge in a broader context. geinformatics fce ctu 2006 36 trends in geoinformatics education examples of study programmes in geoinformatics education in geoinformatics differs from institution to institution, depending on scientific profiles of teachers and researches, traditions, resources, equipment, and capacity. three examples shown bellow illustrate show some of these differences. it is possible to compare how these universities reflect challenges mentioned in chapter 2. aalborg university aalborg university (aau) provides the sole educational program for chartered surveyors in denmark. 
education is carried out at the faculty of engineering and science, department of planning and development. there are three research groups responsible for curricula in land surveying, geoinformatics, spatial information management, and land management. prior to 2005, a one-level master degree programme was offered. following-up recommendations of the bologna agreement, a new study programme in chartered surveying has been established. the structure of this new programme is shown on figure 1 (aau cs 2006). student enrolment amounts to 20-40 students annually. figure 1: structure of the chartered surveying study programme at aau during the first year of the bachelor of science programme, students gain knowledge in mathematics, computer science, gis and also learn principles of project based learning. it is the educational model at the aau since its foundation in 1974. groups of 5-7 students work on assigned projects starting in the first semester. main topics of the 3rd to 6th semesters are spatial planning and land-use management, large scale mapping, land surveying, and cadastral management. students learn planning of urban and rural areas, land management, cadaster, gis, terrestrial measurement, gps, photogrammery, cartography (map geinformatics fce ctu 2006 37 trends in geoinformatics education projections) and principles of adjustment theory. assignments and projects reflect common tasks of land surveying practice. students can later choose one of three specialisations in the master of science programme. overview of the specializationes and some of the courses offered in the 7th and 8th semester is given in table 1. at the end of each semester, groups of 3-4 students present projects on a chosen topic within the theme of the semester as part of the examination. examinations are only from subjects relevant to the project. the 9th semester provides the possibility of professional development according to the students’ choice. students can study abroad, undertake project work in the private sector, choose other courses at the aau within the master of science programme, or start the final project. the study is concluded by an examination which includes defense of the final project, typically completed during the 10th semester in groups of 2-3 students. examples of recent projects include mapping from high resolution satellite images, mobile mapping using a linear laser scanner and determination of volumes by laser scanning data. specialisation measurement science spatial information management land management theme positioning gis theory and technology real estate 7th s. surveying and terrestrial laser scanning, global positioning systems, advanced photogrammetry, adjustment theory, statistics, spatial data libraries & data quality, methodology & science theory, system development, free study activity geospatial analysis i, geocommunication i, data security & copyrights, spatial data libraries & data quality, methodology & science theory, system development, digital administration property law, property economy, land administration systems, legal aspects of land administration, property valuation, spatial data libraries & data quality, methodology & science theory, digital administration 8th s. sensor integration in surveying, sensor integration in photogrammetry & remote sensing, data integration & image. 
analysis, coordinate transformation, object modelling from laser scan point clouds, free study activity geospatial analysis ii, geocommunication ii, gis/it implementation, social aspects of gis, standards & exchange formats urban management, local planning, urban growth management, eu legislation, legal sociology nature & environment protection geinformatics fce ctu 2006 38 trends in geoinformatics education two teachers usually supervise the projects. one or two other teachers from aau, experts from private sector or other universities together with supervisors examine the students. cooperation with experts from private companies and other institutions ensures that the quality of education of the future graduates’ meets the needs of the job market. students evaluate each course at the end of the semester and provide their views on course contents, teacher’s approach to the subject and students, teaching methods etc. to the academic board. all chartered surveying courses are taught in danish while discussions on the possibility of teaching in english are ongoing. information about the study programmes within the department of development and planning is available from the aau homepage (aau d 2006). itc the institute was established in 1950 in enschede, the netherlands, as the international training centre for aerial survey. research, education, and project services are provided by six departments: � department of earth observation science � department of geo-information processing � department of urban and regional planning and geo-information management � department of natural resources � department of water resources � department of earth systems analysis the main mission of the itc is ‘directed at capacity building and institutional development for and in countries that are economically and technologically less developed’, (itc 2006). over 17 000 students from over 165 countries have completed educational programmes at the itc since its foundation. itc offers courses in six specialisations: � geoinformatics � geo-information management � urban management � natural resource management � water resources and environmental management � applied earth sciences it is possible to choose from a number of courses in programmes of various durations and obtain certificates, diplomas and university degrees. short courses and individual modules, undergraduate and postgraduate diploma courses, master courses and master of science courses, distance, or refresher courses are offered. to have a comparison with study programs at the other two universities, the m.sc. degree course in geoinformatics is shown as an example. geinformatics fce ctu 2006 39 trends in geoinformatics education duration of the m.sc. is 18 months and main topics of study, taught in six programme modules are: � generation of framework data � visualisation and dissemination of geospatial data, � design and optimisation of production and dissemination systems, � spatial information theory, � information extraction, � web technology for gis and mapping two additional modules focus on research methods. students may also choose three elective modules within their field of their interest from various modules offered by the six departments. the last six months are dedicated to work on master thesis. english is the official language at the itc. detailed information can be obtained from the itc’s homepage (itc 2006). 
charles university in prague education in geoinformatics is offered at the faculty of science, department of applied geoinformatics and cartography (cu gis 2006). the department is a part of the institute for geography and its new curriculum was established in 2003. since 2005/2006, new bachelor and master study programs have been established. the bachelor degree programme ‘geography and cartography’ educates students in the field of geography, gis, and remote sensing. students choose their bachelor thesis according to their interest in the field of physical geography and geo-ecology, social and regional geography, geoinformatics or cartography. the bachelor study is concluded by the defence of a bachelor thesis and the bachelor state examination. students enrolled in the master degree programme ‘cartography and geoinformatics’ must pass an entry examination and are tested in their knowledge in cartography, remote sensing, and gis. they have to demonstrate motivation for study and research and propose a possible topic of their master thesis. it is expected approximately 20 students enrol in the programme each year. overview of master courses is given in table 2. students are trained in research of the given problem and finding their own solution as part of mandatory and elective labs and exercises. emphasis is on a scientific approach, literature reviews and research of other sources of information, formulating problems to be solved and looking for optimal solutions to the defined problems. during the 9th and 10th semesters, students work on their master thesis. their chosen topic fits their interest but its content and goals have to be approved by the thesis supervisor and the department head. the student has to defend the thesis and pass the final, state, examination. geinformatics fce ctu 2006 40 trends in geoinformatics education semester courses 1st science theory, modern cartographic methods, extraction of topographic information 2nd application of gis, extraction of information from rs data, interactive maps, theory of spatial information 3rd database design and management, web in gis and mapping 4th diploma seminar elective courses remote sensing and natural resources, history of cartography, gps, remote sensing project all courses in the bachelor and master programme consist of lectures and practical exercises. exercises are mostly held in computer rooms where students are asked to solve different tasks using available software packages. these tasks are more complex in master courses and usually require two to three weeks to complete. submission of all completed exercise tasks is the prerequisite for being admitted to an examination of the specific subject. to date, all courses are taught in czech while a proposed study plan for international students, in cooperation with other universities is discussed. e-learning new means of communication and information exchange via the internet impact geoinformatics education. presentations complemented by lecture notes (e.g. powerpoint slides) on the internet are expected by students nowadays. exercises, test questions and assignments are placed on the internet and answered via the internet. students and teachers can communicate via electronic conferences, chat rooms and discussion boards and receive feedback to their work on-line. e-learning programs can be used as supplementary study and training materials to lectures and exercises of existing courses. 
this is called the blended learning technique and may bring higher efficiency and better results to the learning process. all these tools are necessary for distance education and increase in significance in the lifelong learning programmes. software packages such as moodle, blackboard, webct enable efficient implementation of elearning courses. first, available tools for creating the course structure (setting the modules, calendar of events) are used. second, the content of the course is defined (e.g. attaching different documents, linking external web-sites). third, tools for checking the progress of students is added (multiple choice tests, assignments). fourth, communication tools enable discussions between the teacher and student and among students. e-learning is one of topics discussed by education commissions of professional organisations such as isprs, ica, or eurosdr. for example, the eurosdr activity ‘eduserv’, eurosdr’s education service, is offered to participants from national mapping and cadastral organisations, universities and the private sector. since 2002, some 3-4 e-learning courses reflecting the latest development in the field of spatial information sciences have been developed. main the topics usually correspond to recent eurosdr projects. participants first meet teachers and other participants via an introductory seminar. e-learning courses take 2 weeks per topic. during these two weeks participants study course materials, check the progress in their geinformatics fce ctu 2006 41 trends in geoinformatics education learning by self-tests, communicate with teachers and other participant in the course. at the end of the course they are asked to download, complete and submit a written assignment. after a successful completion of the course, the participant obtains a diploma. eduserv is an example of a programme which contributes to continuing professional development. development of e-learning courses, developed for a master or bachelor programmes, or as a form of distance learning, is a time consuming proposition. it should be re-usable in other courses and easily updated and function for a time period long enough to recover some or all of the creation costs (mooney and martin, 2004). conclusion new curricula in geoinformatics education have been established during the past few years. their changes reflect new technological developments and needs of the society where emphasis is on applications in environmental sciences, land management, urban planning, natural resource management etc. graduates should have a good knowledge of theory and methodologies used in spatial data acquisition, processing, analysing, and management. nowadays learning does not end by graduating from a university. new graduates must keep their life-long knowledge up-to-date. this presents a challenge to universities as new courses reflecting recent research and development at both undergraduate and graduate levels are required by the student and professional geoinformatics community. references 1. enemark, s.(2005): global trends in surveying education: and the role of the fig, azimuth vol. 43, nr. 3, pp. 19-21, issn 0728-4586 2. höhle, j.(2006): education and training in photogrammetry and related fields – remarks on the presence and the future, international archives of photogrammetry, remote sensing and spatial information sciences, commission vi, 9.p (to be presented at symposium “e-learning & the next steps for education”, will be published in summer 2006) 3. 
konecny, g.(2002): recent global changes in geomatics education, international archives of photogrammetry, remote sensing and spatial information sciences, commission vi, vol. xxxiv, part 6, 6p. 4. mooney, k., martin, a.(2004): the potential of elearning in the spatial information sciences a resource for continuing professional development, international archives of photogrammetry, remote sensing and spatial information sciences, commission vi, vol. xxxv, part 6, pp. 160-162 5. aau cs 20061 6. aau d 20062 1 http://www.landinspektor.dk/ 2 http://www.plan.aau.dk/indexuk.php geinformatics fce ctu 2006 42 http://www.landinspektor.dk/ http://www.plan.aau.dk/indexuk.php trends in geoinformatics education 7. aau l 20063 8. cu gis 20064 9. itc 20065 10. wikipedia6 3 http://www.lsn.aau.dk 4 http://www.natur.cuni.cz/gis 5 http://www.itc.nl 6 http://en.wikipedia.org/wiki/main page geinformatics fce ctu 2006 43 http://www.lsn.aau.dk http://www.natur.cuni.cz/gis http://www.itc.nl http://en.wikipedia.org/wiki/main%5c_page ________________________________________________________________________________ geoinformatics ctu fce 2011 89 photo-realistic 3d modelling of sculptures on open-air museums francesca duca1, miriam cabrelles2, santiago navarro2, ana elena segui2 and josé luis lerma2 1università di ferrara, dipartimento risorse naturali e culturali corso ercole i d'este, 32, 44100 ferrara, italy francesca.duca@gmail.com 2universitat politècnica de valència (upv). photogrammetry & laser scanning research group (gifle). department of cartographic engineering, geodesy and photogrammetry. c° de vera s/n, building 7i, 46022 valencia, spain (micablo, ansegil, sannata, jllerma)@upvnet.upv.es keywords: terrestrial laser scanning; cultural heritage; documentation; data acquisition; 3d modelling; visualization abstract: laser scanning is a high-end technology with possibilities far ahead the well-known civil engineering and industrial applications. the actual geomatic technologies and methodologies for cultural heritage documentation allow the generation of very realistic 3d results used for many scopes like archaeological documentation, digital conservation, 3d repositories, etc. the fast acquisition times of large number of point clouds in 3d opens up the world of capabilities to document and keep alive cultural heritage, moving forward the generation of virtual animated replicas of great value and smooth multimedia dissemination. this paper presents the use of a terrestrial laser sca nning (tls) as a valuable tool for 3d documentation of large outdoor cultural heritage sculptures such as two of the existing ones inside the “campus de vera” of the upv: “defensas i” and “mentoring”. the processing of the tls data is discussed in detail in order to create photo-realistic digital models. data acquisition is conducted with a time-of-flight scanner, characterized by its high accuracy, small beam, and ultra -fine scanning. data processing is performed using leica geosystems cyclone software for the data registration and 3dreshaper software for modelling and texturing. high-resolution images after calibration and orientation of an off-the-shelf digital camera are draped onto the models to achieve right appearance in colour and texture. a discussion on the differences found out when modelling sculptures with different deviation errors will be presented. processing steps such as normal smoothing and vertices recalculation are found appropriate to achieve continuous meshes around the objects. 1. 
introduction in the last decade, according to unesco charter on the preservation of the digital heritage 2003 [1], the resources of human knowledge or expression, whether cultural, educational, scientific and administrative, and other kinds of information are increasingly created digitally or converted into digital form from existing resources. nowadays great efforts are dedicated to improve digital documentation technology in order to transmit knowledge to our future generations. in fact, world‟s digital heritage is at risk of either being lost or being damaged. without any doubt, appropriate digital heritage recording techniques are required to measure the state and condition of objects, monuments and sites. there is a wide variety of techniques for three-dimensional measurement. the selection of the right technique should be based on the scale, the size and the complexity of the object [2,3]. photogrammetry and laser scanning are widely used to provide large number of measurements and are usually suitable for simple and complex objects depending on the approach. image-based photogrammetry, extracting 2d information from single imagery or 3d information from either stereoscopic plotting or automatic image matching techniques, provide geometric information and texture of the object's surface; range-based laser scanning, from a static position on the ground (terrestrial laser scanner) or from a moving platform such as an aircraft, unmanned aerial vehicle (uav), or mobile mapping systems are becoming widely used, namely in combination with imagery, and considered as an efficient alternative to traditional survey techniques. laser scanning is increasingly used to collect a large quantity of three-dimensional data in a short time, generating a point cloud with intensity values in a local coordinate system; additional information such as rgb values is usually provided by internal or external digital cameras. laser scanning is generally used to record surface information to generate not only 3d models but also 2d sections, profiles and plans. it contributes to improve either the mailto:francesca.duca@gmail.com ________________________________________________________________________________ geoinformatics ctu fce 2011 90 geometric study of the monuments “as are”, the rigorous analysis of complex sites, the understanding of complex shapes, and, last but not least, the dissemination on multimedia platforms [4,5]. recently many approaches suggest the integration of the different techniques to improve the resolution of the 3d model, the accuracy of the objects, the definition of geometries and/or the color enhancement [6,7,8,9]. the accurate recording in 3d of dimensions and shapes is essential in projects related to the restoration and documentation process. it allows users to explore the state of conservation of the structures, monitoring them over time. the complexity of multiple forms of archaeological sites and monuments and his cultural interest requires a high level of geometric detail and colour. nowadays, the technology is used for many different applications like archaeological site surveys [7,8], digital conservation and creation of digital 3d repositories [10,11], web geographic systems [12], etc. the integration of image-based and range-based solutions allow the generation of very realistic and accurate results in terms of geometry and texture, enabling to analyze shape and dimensions at high resolution. 
the 3d survey and modelling of complex objects at different resolutions is realistically possible, preferably by carrying out multi-resolution approaches, which are well suited to complex, detailed and large cultural heritage [13,14,15]. this paper presents the results achieved with a time-of-flight terrestrial laser scanner and an external off-the-shelf digital camera for the 3d documentation of two large sculptures placed in the open-air museum “museo al aire libre” inside the “campus de vera” of the universitat politècnica de valència: “defensas i” arcadi blasco pastor 2003 and “mentoring” stephen j. daly 2003 (figure 1). the entire area is easily accessible, full of sculptures and trees, acting as a corridor among departments, institutes and facilities throughout the university [16]. the “defensas i” sculpture is like a tower with three sides, measuring approximately 3.5 m high and 2.5 m wide. the grayish surface is not smooth but rich in many intentional holes and incisions. at the top there are sets of sharp pyramids representing the tower’s battlements. the “mentoring” sculpture is like a big metal doll, approximately 5 m high and 2 m wide, characterized by very complex objects on its upper part, like spheres, rings, spirals and cones. figure 1: sculptures “defensas i” (a, b) and “mentoring” (c, d) 2. data acquisition a leica hds scanstation2 time-of-flight scanner (figure 2a), characterized by its high accuracy, small beam diameter and divergence and ultra-fine scanning resolution (up to 1 mm), was used for the survey. the detection range of this device is 300 m at 90% and 134 m at 18% albedo, with a field of view of 360° (horizontal) and 270° (vertical) [17]. artificial targets (figure 2b-e) were placed in the scene around the sculptures at different heights in order to facilitate the alignment of the point clouds acquired from different positions around the sculptures. figure 2: a) leica hds scanstation2 time-of-flight scanner; b-e) artificial targets used for the survey 2.1 “defensas i” arcadi blasco pastor 2003 for this sculpture the point cloud was acquired from two opposite scan positions with a distance to the sculpture of 7.5 m and 4 m, respectively. the resolution of each scan was set to 5 mm on the object’s surface. a mean absolute error of 3 mm was achieved within the registration process. a point cloud of about 1400000 points (figure 3a) was used to create the 3d model. 2.2 “mentoring” stephen j. daly 2003 for this sculpture the point cloud was acquired from four scan positions with a distance to the sculpture of 14.9 m, 41.7 m, 51.7 m and 50.3 m, respectively. the resolution of each scan was 5 mm on the object’s surface. a mean absolute error of 2 mm was achieved within the registration process. a point cloud of about 1083670 points (figure 3b) was used to create the 3d model. 3. terrestrial laser scanning workflow 3.1 registration leica cyclone register 6.0 software was used to align all the scans into a single common coordinate system. the registration was performed following a target-to-target registration, by means of the artificial targets, and a cloud-to-cloud registration, manually picking corresponding points in the clouds. the point clouds were exported in the leica .pts interchange format.
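the target-to-target registration amounts to estimating a rigid-body transformation from corresponding target coordinates measured in two scans. the sketch below shows the standard closed-form least squares solution, as an illustration of the principle only and independent of the cyclone software actually used:

```python
import numpy as np

def rigid_transform(src, dst):
    """least squares rigid-body transformation with dst ~ R @ src + t.

    src, dst: (n, 3) arrays of corresponding target coordinates from two
    scan positions (n >= 3, targets not collinear)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def mean_abs_error(src, dst, R, t):
    """residuals of the aligned targets, comparable to the mean absolute
    registration errors quoted above."""
    return np.mean(np.linalg.norm(dst - (src @ R.T + t), axis=1))
```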
figure 3: point clouds: a) “defensas i”; b) “mentoring” 3.2 modelling 3dreshaper 5.3 software was chosen to carry out the 3d model creation: the most computationally demanding and time-consuming step in the laser scanning workflow. the workflow included data cleaning, noise filtering, meshing, smoothing, joining parts and hole filling. due to the morphological complexity of the sculptures, each sculpture was divided into different parts and modelled separately with different parameters (figure 4). figure 4: sculpture decomposition: a, b) “defensas i”; c, d) “mentoring” data cleaning/noise filtering: the first step was removing noisy data, outliers, and all the unwanted parts from each point cloud. two different functions were used: a) noise reduction; and b) explode with distance. for the former, 0.1% of the total number of points was deleted; for the latter, all isolated points separated by more than 1.5 cm were deleted. meshing/triangulation: this step involved data triangulation to derive a 3d triangular mesh. a “good” mesh is one that keeps only the useful and valid points. two criteria are used to achieve this goal. first, a quality criterion based on the noise of the measuring system: the idea is to keep only the most reliable points, eliminating the points that are above or below a theoretical surface. second, a geometrical criterion: the system will keep points in the areas of high curvature based on a deviation error, which is the maximum distance between the theoretical surface and the triangulated irregular network. figure 5 displays the way both criteria work on an idealized undulated surface [18]. the effects of the parameterization with the deviation error are presented in section 4. figure 5: undulated surface after removing the noise and fitting the mesh to a maximum deviation error. smoothing: owing to the roughness of the surface, the shape of the mesh was noisy. several smoothing values were selected for the different parts of the mesh. in this way, the appearance of the final mesh was improved. joining the different parts: the different parts of the sculptures were joined to deliver single digital surface models. a summary of the different steps is shown in figures 6 and 7. figure 6: modelling phases for “defensas i”: a) meshing; b) smoothing; c) joining parts; d) hole filling. figure 7: modelling phases for “mentoring”: a) meshing; b) smoothing; c) joining parts; d) hole filling. 3.3 texturing 3dreshaper 5.3 software was used to drape the texture onto the 3d models, using images from an off-the-shelf digital camera, the canon powershot g11, at maximum resolution (10 mpixel). the position, the orientation and the inner parameters of the camera were determined during the orientation process after selecting a minimum of three homologous points between the 3d surface and the corresponding images (figure 8). as the digital camera was not calibrated beforehand, six to eight homologous points were measured on each picture in order to obtain the best orientation adjustment. figure 8: texturing the images onto the models: a, b) “defensas i”; c, d) “mentoring” the final results achieved for the two sculptures can be visualized in figure 9.
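the image orientation from homologous points is a classical space resection. the sketch below illustrates the principle with opencv, not with the 3dreshaper routine used in the project; it assumes an approximate interior orientation (the workflow above additionally refined the inner parameters), and the points and pose are simulated so that the recovered values can be checked:

```python
import numpy as np
import cv2

# assumed (approximate) interior orientation: focal length and principal
# point in pixels for a 10 mpixel image
K = np.array([[3200.0, 0.0, 1824.0],
              [0.0, 3200.0, 1368.0],
              [0.0, 0.0, 1.0]])

# hypothetical homologous points on the sculpture surface (model coords, m)
X = np.array([[0.1, 0.2, 0.3], [0.9, 0.1, 0.5], [0.4, 1.2, 0.2],
              [1.1, 0.8, 0.9], [0.3, 0.7, 1.4], [0.8, 1.5, 1.1]])

# simulate the picked image coordinates with a known camera pose ...
rvec_true = np.array([[0.10], [-0.20], [0.05]])
tvec_true = np.array([[0.20], [0.10], [4.00]])
image_pts, _ = cv2.projectPoints(X, rvec_true, tvec_true, K, None)

# ... and recover that pose from the six homologous points
ok, rvec, tvec = cv2.solvePnP(X, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # matches rvec_true / tvec_true
```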
the number of pictures draped onto the 3d model was three for “defensas i”; more images were needed for “mentoring” due to the complexity of the upper part (four for the main body and eight for the details).
figure 9: final 3d models (a, c) and photorealistic 3d models (b, d): a, b) “defensas i”; c, d) “mentoring”
4. meshing and smoothing analyses
one of the most critical steps for creating the 3d mesh is to choose the appropriate parameters regarding noise reduction and deviation error (fig. 5). noise reduction depends on the measurement noise of the laser scanner as well as on the sampling resolution. unless there is oversampling or obvious noise in the data acquisition, the critical parameter for modelling is the deviation error. figure 10 displays the point cloud and the image patch counterpart that were used to determine the best meshing parameters to model the sculptures.
figure 10: a) detail of the point cloud with intensity values; b) corresponding image for “defensas i”
the different deviation error values selected for meshing, as well as the reduction ratios achieved for the different solutions, are presented in table 1. the reduction ratio ranges from 32.7% up to 95.5%; option 1, with 0 deviation error, means that the deviation error is not taken into account, i.e. 100% of the triangles are kept. the higher the deviation error, the smaller the output mesh file, so the choice of this parameter directly affects both the results and the file size. the results of the different meshes after changing the deviation errors are displayed in fig. 11.
table 1: effects of the deviation error on the number of output triangles
option | deviation error (mm) | number of triangles | reduction ratio (%)
1 | 0 | 70807 | 0.0
2 | 1 | 47704 | 32.7
3 | 2 | 25047 | 64.6
4 | 3 | 11994 | 83.1
5 | 4 | 5992 | 91.5
6 | 5 | 3206 | 95.5
figure 11: meshes of the same patch obtained with different deviation errors: a) 0 mm; b) 1 mm; c) 2 mm; d) 3 mm; e) 4 mm; f) 5 mm
as can be seen in figure 11, there is some noise in all the meshes, independently of the deviation errors. moreover, the noise does not correspond with the true surface of the sculpture. on the one hand, there are tiny stripes that cannot be modelled with the selected sampling. on the other, there are some blobs of mortar irregularly spread on the surface's material. therefore, smoothing was required to minimize the roughness (i.e. maximize the planar continuity in general) of the different surfaces (figure 12). in particular, normal smoothing with controlled vertex recalculation was carried out to improve the appearance of the meshes.
figure 12: meshes after normal smoothing with different deviation errors: a) 0 mm; b) 1 mm; c) 5 mm
there were small differences in magnitude before and after normal smoothing, but the regular appearances obtained after smoothing were visually significant (cf. figures 11 and 12). comparing metrically the unfiltered meshes with the corresponding filtered meshes, the differences were relatively small. differences of ±1 mm yielded statistically equivalent values from 60.3% up to 90.3% for option 6 and option 1, respectively. on the contrary, maximum differences in the range of ±1-9 mm were computed with option 6 for 39.7% of the analysed area; for option 1, only ř.ř% in the range of ±1-3 mm.
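the reduction ratios in table 1 follow directly from the published triangle counts, and the deviation-error criterion itself can be imitated on a point cloud: a point is redundant when it stays within the deviation error of the surface suggested by its neighbours. the sketch below is a deliberate simplification of what a meshing package does; the neighbourhood size, the synthetic patch and the noise level are assumptions, and the numbers agree with table 1 only up to rounding.

```python
# part 1: reduction ratios of table 1 recomputed from the published triangle counts
triangles = {0: 70807, 1: 47704, 2: 25047, 3: 11994, 4: 5992, 5: 3206}   # dev. error [mm] -> count
for mm, n in triangles.items():
    print("deviation error %d mm: %.1f%% reduction" % (mm, 100 * (1 - n / triangles[0])))

# part 2: toy deviation-error filter - drop points closer than `dev_err` to the plane
# fitted through their k nearest neighbours (a stand-in for the theoretical surface)
import numpy as np

def plane_distance(point, neighbours):
    centred = neighbours - neighbours.mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]                 # direction of smallest variance
    return abs(np.dot(point - neighbours.mean(axis=0), normal))

def decimate(points, dev_err, k=12):
    keep = []
    for p in points:
        nn = points[np.argsort(np.linalg.norm(points - p, axis=1))[1:k + 1]]
        keep.append(plane_distance(p, nn) > dev_err)       # keep only "informative" points
    return points[np.array(keep)]

# synthetic 5 mm-spaced patch with gentle relief and ~1 mm measurement noise (assumed values)
x, y = np.meshgrid(np.arange(0, 0.3, 0.005), np.arange(0, 0.3, 0.005))
z = 0.01 * np.sin(20 * x) + np.random.normal(0, 0.001, x.shape)
cloud = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
for dev_err in (0.001, 0.002, 0.005):                      # 1, 2 and 5 mm deviation errors
    kept = decimate(cloud, dev_err)
    print("toy patch, %d mm: %.1f%% reduction" % (dev_err * 1e3, 100 * (1 - len(kept) / len(cloud))))
```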
worth noticing are the small differences achieved between option 1 and option 2 (figure 13b), where 94.6% of the analysed area was within ±1 mm. in fact, the difference images displayed in figure 13 (a, b and c) show basically noise. nevertheless, deviation errors equivalent to the error of the laser scanner yielded larger differences than expected for the patch area, minimizing the presence of the mortar among the pieces that shape the sculpture.
figure 13: differences between meshes without and with filtering: a) option 1-1 smooth; b) option 1-2 smooth; c) option 6-6 smooth; and d) option 6 smooth-2 smooth
whereas noise is self-evident for option 6 in figure 13c, the average compensation of the noise after computing the differences between 6 smooth (figure 13c) and 2 smooth (figure 13d) is significant. the latter image displays the compensation of the noise coming from options 2 and 6, yielding differences within ±1 mm for 71.3% of the area, in other words an improvement of about 10% in average quality; only a small number of peaks (2) reached up to 8 mm. after the research on the parameterization for meshing and smoothing, the recommended solution was option 2 after normal smoothing (figure 12b). this option yields a balance among the number of triangles, the reduction ratio of the number of triangles (32.7%) and the eventual shape of the output model. this set of parameters was extrapolated from the test patch to the modelling of the whole sculptures, “mentoring” and “defensas i”. however, option 6 did not yield as bad results as expected beforehand, and only around the break lines was the output mesh over-smoothed. the strong reduction of the number of triangles, up to 95.5%, should not be rejected out of hand for producing fast and convincing photorealistic 3d models when the output model is fully textured with high-resolution imagery.
5. conclusions
in this paper we have presented the results of a full 3d digitization of large contemporary art sculptures with a time-of-flight scanner. the output results demonstrate the potential of using terrestrial laser scanners to record geometrically the shape of cultural heritage assets such as large and complex sculptures in gardens, as well as the power of 3d modelling based on range-based solutions plus digital imagery for texturing. nevertheless, the selection of the right parameterization for modelling is not a trivial step and should be carefully analyzed to yield acceptable metric reconstruction. the improvement of the appearance of the surface of the objects based on imagery is thoroughly demonstrated herein with the two sculptures. without texturing, a higher-resolution 3d model would have had to be recorded to improve the quality of the digital reconstruction. however, owing to both the size and shape of the sculptures, other optical solutions such as structured-light systems or triangulation laser scanners would have required both longer acquisition times and additional equipment to digitize the upper parts of the sculptures. the high-quality photorealistic 3d models can be used for web dissemination activities, to improve social awareness of cultural heritage value, for restoration and even for monitoring of the state of the objects over time.
6. acknowledgements
the authors gratefully acknowledge the support from the spanish ministry of science and innovation, project har2010-18620.
7. references
[1] united nations educational, scientific and cultural organization: guidelines for the preservation of digital heritage, http://unesdoc.unesco.org/images/0013/001300/130071e.pdf, 2011-04-15.
[2] böhler, w. et al.: 3d scanning and photogrammetry for heritage recording: a comparison, 12th international conference on geoinformatics, geospatial information research: bridging the pacific and atlantic, university of gävle, sweden, june 2004, 291-298.
[3] patias, p.: cultural heritage documentation, international summer school “digital recording and 3d modeling”, aghios nikolaos, crete, greece, 24-29 april 2006.
[4] lerma garcía, j.l., van genechten, b., santana quintero, m.: 3d risk mapping. theory and practice on terrestrial laser scanning. training material based on practical applications. universidad politécnica de valencia, valencia, 2008, 261 pp.
[5] english heritage: 3d laser scanning for heritage. advice and guidance to users on laser scanning in archaeology and architecture, http://www.helm.org.uk/upload/pdf/publishing-3d-laser-scanning-reprint.pdf, 2007.
[6] cabrelles, n., galcerá, s., navarro, s., lerma, j.l., akasheh, t., haddad, n.: integration of 3d laser scanning, photogrammetry, thermography to record architectural monuments, proceedings of cipa 2009, kyoto, japan, 11-15 oct. 2009, pp. 6.
[7] al-kheder, s., al-shawabkeh, y., haala, n.: developing a documentation system for desert palaces in jordan using 3d laser scanning and digital photogrammetry, journal of archaeological science, 2009, 36, 537-546.
[8] lerma, j.l., navarro, s., cabrelles, n., villaverde, v.: terrestrial laser scanning and close range photogrammetry for 3d archaeological documentation: the upper palaeolithic cave of parpallò as a case of study, journal of archaeological science, 37 (3), 2010, 499-507.
[9] remondino, f., rizzi, a.: reality-based 3d documentation of natural and cultural heritage sites: techniques, problems, and examples, appl geomat (2010) 2, 85-100.
[10] ikeuchi, k., miyazaki, d.: digitally archiving cultural objects. springer, new york, 2008, 503 pp., isbn 978-0-387-75806-0.
[11] ruther, h.: documenting africa's cultural heritage, proceedings of vast 2010, archaeology and cultural heritage, paris, september 2010.
[12] apollonio, f.i., corsi, c., gaiani, m., baldissini, s.: an integrated 3d geodatabase for palladio's work, international journal of architectural computing, 2010, 2(8).
[13] ikeuchi, k., oishi, t., takamatsu, j.: digital bayon temple: e-monumentalization of large-scale cultural-heritage objects, proceedings of asiagraph 2007, shanghai, 2007, 1 (2), 99-106.
[14] levoy, m., pulli, k., curless, b., rusinkiewicz, s., koller, d., pereira, l., ginzton, m., anderson, s., davis, j., ginsberg, j., shade, j., fulk, d.: the digital michelangelo project: 3d scanning of large statues, proceedings of siggraph 2000, new orleans, louisiana, usa, july 2000, 131-144.
[15] remondino, f., girardi, s., rizzi, a., gonzo, l.: 3d modeling of complex and detailed cultural heritage using multi-resolution data, journal on computing and cultural heritage 2(2009)1, 2:1-2:20.
[16] universidad politécnica de valencia: campus en tercera dimensión (primera fase), universidad politécnica de valencia, valencia, 1993, 73 pp.
[17] leica geosystems: leica scanstation 2 product specifications, http://hds.leicageosystems.com/downloads123/hds/hds/scanstation/brochures-datasheet/leica_scanstation%202_datasheet_us.pdf, 2011-05-19.
[18] 3dreshaper 5.3 tutorial, 2010, http://www.3dreshaper.com.
the historical foundations: the historical architectural treatise as an information source on the architectural heritage
fernando da casa, ernesto echeverria, and flavio celis*
*school of architecture, research group “intervención en el patrimonio y arquitectura sostenible” (intervention in heritage and sustainable architecture), university of alcalá de henares, c/ santa úrsula, alcalá de henares, madrid, spain, fernando.casa@uah.es
keywords: architectural heritage conservation, historical architectural treatise, historical foundations.
abstract: in order to address architectural heritage conservation, we must be familiar with the medium with which we will be working, its function and its response to incidents or external actions (natural or anthropogenic), and with how the buildings were conceived and constructed, in order to understand how they will be affected by the intervention process to which they will be subjected and to adopt adequate measures so that these processes will not harm the buildings. an important element is the foundation. this is a fundamental, yet often forgotten, element. it is important to know the history of the foundations, how and why they were constructed, and for this it is essential to study architectural treatises as the origin of their design. it is surprising to read classical architecture treatises and observe that they do not refer to calculations of dimensions, but to constructive solutions that today may seem clever because they are obvious, although in reality they do not reveal the thoughts of the designer or builder. the historic architectural treatises on construction that significantly influenced spanish construction, which we studied and present in this article, include vitruvius and palladio as well as the developments of the eighteenth and nineteenth centuries, and even of the first half of the twentieth century: vitruvius (1st century bc), palladio (1524), alberti (1582), cristóbal de rojas (1598), fray laurencio san nicolás (1639), brizguz y bru (1738), rieger (1763), fornes y gurrea (1841), espinosa (1859), marcos y bausá (1879), ger y lobez (1898) and barberot (1927).
1. introduction
interventions focused on the conservation of historic buildings are a relatively recent reality in the social context, arising when there is concern to conserve the “historic memory” of the population, including buildings and monuments as a fundamental part of this memory. knowing how the foundation behaves is fundamental because it is the starting point of the building. historic foundations are even more important because they do not correspond to the current types of concrete foundations built today [4]. in the past, alberti [2] warned: “so much of what happens below ground is unknown, that to entrust it with the responsibility of bearing a structural and financial burden can never be done without risk. and so, especially in the foundations, where more than anywhere else in the building the thought and attention of a careful and circumspect builder are required, nothing must be overlooked. a mistake elsewhere will cause less damage, and be less difficult to rectify and more easily borne, but in the foundations a mistake is inexcusable.” if we are working on an existing foundation without proper knowledge of this element, the problems can multiply.
many interventions on this element have been completed without knowledge of the construction system on which it was based. as jacques heyman said, “the most impressive thing about historic buildings is that some of them still exist,” adding, “although they have been manipulated as they have.”
2. historic foundations and their uniqueness
the conditions in which, historically, a foundation was initially planned could vary greatly. sometimes these elements needed to be built on a variety of terrains, some even inadequate to support the loads these elements transmitted. their location was sometimes determined by aspects that had nothing to do with their constructive condition, and there are references to “unhealthy” locations chosen for the supervision and defense of trade roads (as in the case of calatrava la vieja). all of this implies that the terrain conditions could be inadequate for building [4]. another factor was the material used to build these foundations. it was not considered an important factor: materials were usually of low quality and heterogeneous, and degeneration processes were not controlled. in addition, it was impossible to carry out regular maintenance operations. the lack of geotechnical knowledge was notorious, because geotechnics is a modern science; it was a process of trial and error. when we are going to work on a historic foundation, we may encounter very different conditions in which to do so. this will depend on the foundation itself, as well as on other aspects, including the building's characteristics and historic nature [3,5]. we must know how to look; as victor hugo said, “how sad to think that nature speaks and mankind does not listen,” and, using a medical metaphor, therapy should be based on a good diagnosis. this diagnosis is not found, but looked for. in this case, it is important to look for the sources. for this, it is important to know the history of the foundations, how and why they were built in a certain manner. it is important to study architectural treatises as the origin of design. in addition, it is surprising to see that these historic architectural treatises (from vitruvius to benito bails, including alberti) do not make reference to dimensions via calculations, but present guidelines for construction solutions that we might consider surprising today.
3. historic sources: architecture and construction treatises
historic architecture and construction treatises that have significantly influenced construction in spain include the following:
1st c. bc: the ten books of architecture, marcus vitruvius pollio [1]
1524: the four books on architecture, andrea palladio [6]
1582: the art of building in ten books, leon battista alberti [2]
1598: teoría y práctica de fortificación, cristóbal de rojas [7]
1639: arte y uso de arquitectura, fray laurencio san nicolás [8]
1738: escuela de arquitectura civil, a. genaro brizguz y bru [9]
1763: elementos de toda la arquitectura, christian rieger [10]
1841: práctica del arte de edificar, manuel fornes y gurrea [11]
1859: construcciones de albañilería, p. celestino espinosa [12]
1879: manual del albañil, ricardo marcos y bausá [13]
1898: construcción civil, florencio ger y lobez [14]
1927: tratado práctico de edificación, étienne barberot [15]
concerning the analysis of the documents mentioned above, in the parts where they develop aspects related to building foundations, we must indicate that these texts include precise instructions on how to estimate the size of foundations and build them. in the 1st century, vitruvius indicated [1]: “the foundations of these works should be dug out of the solid ground, if it can be found, and carried down into solid ground as far as the magnitude of the work shall seem to require, and the whole substructure should be as solid as it can possibly be laid.” “houses which are set level with the ground will no doubt last to a great age, if their foundations are laid in the manner which we have explained in the earlier book, with regard to city walls and theatres. but if underground rooms and vaults are intended, their foundations ought to be thicker than the walls which are to be constructed in the upper part of the house, and the walls, piers, and columns of the latter should be set perpendicularly over the middle of the foundation walls below, so that they may have a solid bearing; for if the load of the walls or columns rests on the middle of spans they can have no permanent durability.” “above ground, let walls be laid under the columns, thicker by one half than the columns are to be, so that the lower may be stronger than the higher. hence they are called ‘stereobates’; for they take the load. and the projections of the bases should not extend beyond this solid foundation.” during the renaissance, leon battista alberti [2] considered vitruvius' specifications and developed them further: “in setting out the foundations, it should be noted that the base of the wall and the plinth (which are also considered part of the foundations) must be somewhat wider than the proposed wall, for the very same reason for which people who walk through the snow in the tuscan alps strap to their feet rackets strung with cord, since the enlarged footprint will prevent them from sinking so much.” “finally, the whole base of the trench must be made absolutely level, without sloping in any direction, so that the imposed load will be distributed evenly; for it is in the very nature of weights to veer toward the lowest point.” “with rows of columns, then, there is no need to fill an extended trench with one continuous structure; it is better first to strengthen the seats or beds of the columns themselves and then to link each of these with an inverted arch, with its back facing downward, so that the level of the area becomes its chord. the ready support given by the arches will help to prevent the ground from giving way under the different loads that converge on a single point from all sides.” in addition, we can find specific references to the importance of being aware of the nature of the subfloor in brizguz y bru, in 1738 [9], in the second chapter of his book, called “de los fundamentos y de algunas condiciones que se deben observar para firmeza y seguridad de los edificios” (on foundations and some conditions to be observed for the firmness and safety of buildings).
he warned that “the most important knowledge an architect can have concerns the nature of the terrain, the depths of the land.” this sheds light on the importance that knowledge of the subfloor and of the types of terrain had at that time. concerning foundations, vitruvius emphasised the importance of correct construction in order to avoid subsequent problems: “there is nothing to which an architect should devote more thought than to the structure of the foundations.” [1]. palladio, often citing vitruvius and alberti, indicated: “the foundations or base of a building is the part of its walls that is under ground and sustains the building's load. of the errors committed in the art of building, the most damaging are the ones made in the foundations, because they cause the ruin of the entire building and cannot be repaired without much difficulty and expense.” [6]. it is interesting to note that alberti [2] did not consider the foundations part of the building: “the foundations, unless i am mistaken, are not part of the structure itself; rather they constitute a base on which the structure proper is to be raised and built.” he continues: “a foundation – that is to say, ‘a going to the bottom’ – and a trench will be necessary wherever a pit must be dug to reach solid ground […] of stone, for example.” he confers great importance, however, on a structure's foundations: “so much of what happens below ground is unknown, that to entrust it with the responsibility of bearing a structural and financial burden can never be done without risk. and so, especially in the foundations, where more than anywhere else in the building the thought and attention of a careful and circumspect builder are required, nothing must be overlooked. a mistake elsewhere will cause less damage, and be less difficult to rectify and more easily borne, but in the foundations a mistake is inexcusable.” to conclude with this reference, in this quote we may identify the possible reason for the absence of details of the foundations in architectural projects: where the building edge meets the ground, we usually find a line and then simply the white of the paper or, at best, “a trench filled with stone.”
figure 1: plan of the king's house in the hague. the simancas archive.
it would now be interesting to recall siegfried giedion's statement concerning the split between science and technology on one hand, and art on the other, in reference to construction and architecture. this aspect is also reflected in these documents, in which the aesthetic and formal aspects are treated independently of their relationship to construction. for the casual reader, it may seem that the treatises only deal with one aspect or the other, depending on the volume in question. we should indicate that in 1763, rieger [10], in chapter iii, “list of the main parts of any building and its constitution in detail”, defines the foundation of a building and differentiates, for the first time in such treatises, the two parts that comprise it: “the part under the earth, or the base of all elements. the foundation is the first and deepest part of the building, on which everything else is based, and is divided into foundation and base. the foundation is the trench itself in which the deepest part of the wall is inserted. the base is the structure or construction that sustains the entire building.” it was the first time that the structural nature was specifically addressed, as distinct from the foundation understood as a trench.
until then, only the term foundation was addressed. espinosa (1859) [12] describes only two types of terrain: firm terrain comprised of “naturally compact earth including clay, stones and rocks,” or terrains that are loose and compactable, including “muddy terrain, or those with loose sand if not contained on all sides, plains that are not consolidated by the action of time, etc.” for this author, and those before him, it is possible to build on any of the types of grounds or terrains mentioned above, although it is necessary to correctly calculate the size of the foundations and to correctly use various auxiliary systems to prevent future problems. marcos y bausá (1879) [13], unlike the others, who first examine the ground and then begin excavating, proposes the excavation first and then the subsequent examination of the ground to verify its suitability, in other words: “once you have found the terrain that seems firm, you must probe it to see if the nature of the ground changes as depth increases, using an iron probe or rod with one end toothed and a crossbar on the other, also in iron; the rod is inserted into the ground vertically, taking care not to bend it, having rubbed the teeth with fat, and once it has been introduced completely, it should be rotated like a drill, then removed carefully so that the soil from the deepest parts remains stuck to the fat on the toothed end of the rod.” it is not enough, however, to know whether the lower layers of the ground where the foundations will be located are consistent and resistant. it is also necessary to verify the existence or absence of shafts that could sink the foundations. for this, he again proposes two rebound tests, both already explained: “to verify the presence of underground shafts, you must hit the ground firmly with a wooden plank or ‘pisón’; you must then judge by the hollowness of the resulting sound. you may also use a bucket full of water placed on the ground with a small piece of paper on the surface of the water, placed so that the top of the paper does not get wet, then hit the ground firmly next to the bucket and if the water does not move and the paper remains dry, then the ground is firm.” in this last case, the traditional movement of the water was substituted by the wetting of the paper as the indicator of the firmness of the ground. ger y lobez (1898) [14], in his chapter “on foundations and grounds for building”, when he first addresses foundations, divides them into two large groups depending on the presence or absence of water: “ordinary, if they are established in dry land where it is easy to excavate until sufficiently solid ground is reached, without the existence of water”; “hydraulic, when these must be laid in wet ground or over springs or in terrain that is covered by water.” of the authors mentioned in this article, ger y lobez can be considered the first to base his calculations not only on experience, but also on certain scientific premises. despite this fact, it is quite easy to assume that these calculations are not valid for all types of grounds or terrain, or for all the different construction methods used to build foundations. in the first method, despite its apparent solidity, the size is calculated based on the resistance of the construction and not of the ground, so that the future structure could suffer unbearable loads coming from the building.
in the second method, the size calculated would only be valid for the foundation building method for which it was established, packing or tamping compactable ground. it should be taken into account, however, that these terrains have average resistances, and this method would not work if they were of low quality or if the terrain were marshy. barberot (1927) [15] was the first author to openly show a clear and precise example of how to calculate the size of foundations, valid for all types of ground, because he relates the terrain's resistance to the approximate load that the structure will transmit to the ground. because of this, the terrain's resistance must be known, and he included a table with approximate resistances in his treatise.
4. references to special foundations
it is interesting to find references to the application of “specific techniques” when working on unsuitable grounds or soils. for example, vitruvius mentions specific techniques for working on unstable or marshy ground: “if, however, solid ground cannot be found, but the place proves to be nothing but a heap of loose earth to the very bottom, or a marsh, then it must be dug up and cleared out and set with piles made of charred alder or olive wood or oak, and these must be driven down by machinery, very closely together like bridge-piles, and the intervals between them filled in with charred coal, and finally the foundations are to be laid on them in the most solid form of construction.” [1]
figure 2: illustration from the ten books of architecture, vitruvius (1st century bc)
another similar reference, possibly influenced by the previous text, concerns palladio [6] and his chapters on foundations (chapter vii, “on the types of ground where foundations must be laid”, and chapter viii, “on foundations”). palladio explains how to build foundations on marshy ground: “[...] dig until a firm and solid base is found or, if this proves difficult, excavate some of the sand and gravel and put in piling until the points of the oak piles reach good and solid ground.” these techniques also apply to sandy or gravelly ground and around rivers or river beds. in this last case, he explains the procedure to be followed when the solid ground is located deep within the earth: “[...] then sink piles, which should each be an eighth of the height of the walls long and a twelfth of their own length thick. the piles should be driven so close to one another that no more could be inserted.” other texts mention procedures similar to the ones described above, although it is rieger's text (elements of architecture, 1763) [10] that makes reference to new techniques, for example, “for building on soft sand, lay the beams horizontally on the ground,” as shown in the figure at the beginning of the previous section. there are various warnings concerning foundations laid in grounds of diverse consistency, and recommendations include acting with the same caution as when building on poor quality terrain, a first reference to the problem of differential settlement. he also makes reference to foundations built with piles and inverted arches, as recommended by alberti and other authors: the inverted arch was considered better than the ordinary arch because the inverted arch has a solid and resistant wall underneath it, while the ordinary arch has nothing but air.
figure 3: illustration xx, fig. 7, christian rieger (1763)
references to other techniques, like pile driving, are more developed in marcos y bausá's text [13], in which the author describes the driving of wooden piles “a plomo” (straight down, vertically), where the pile heads should be protected with an iron cap to prevent splintering, then placing wooden beams on top to create a strong base, along with the correct tamping of the interstitial materials, in order to “build the foundations on this structure.”
figure 4: illustration by marcos y bausá (1879)
this reference makes it clear that the new element is not considered part of the foundation itself, but rather the “treatment of the terrain” necessary in order to begin building the foundations. in celestino espinosa's treatise [12], we find the first references to the use of concrete in piles, but in a unique way, because once the wooden pile is inserted, “as it is being driven into the ground, it is turned a few times, then a bar is inserted in the small hole that opens up around the pile; an iron collar ensures that the head will not splinter. this way, the terrain wall is flattened and can be more easily removed. the concrete is poured into this hole.” already in the twentieth century, barberot develops a more complete method in his treatise concerning the techniques for pile driving, placing particular emphasis on the construction of the wooden elements, also indicating that “reinforced concrete piles are made that have very good results,” although the instructions he gives do not include specifications of the material used [15].
figure 5: illustration from the practical treatise on building (étienne barberot, 1927)
last, it should be indicated that the solutions for muddy ground are the same ones given in the first treatises concerning the building of strong bases with wooden structures.
5. conclusion
it is interesting when, by chance, one searches for or reads documentation related to architecture treatises or the origins of constructive processes. the reader is often surprised to see issues that relate perfectly to modern times. these aspects of technique are the “ancestors” of our modern techniques, and the reasons for their use are perfectly reasoned and justified with “modern” assumptions. throughout history, man has often asked himself “why?”, and this can be considered the beginning of the evolution of technology. once the “traditional” form was developed and applied, and as time passed without the application of innovations in technology, those who used the traditional techniques “as they have always done” did so because they did not know, or had forgotten, to ask themselves “why?”, and this is basically one of the great problems of modern construction. many technicians do not know the origins of the systems they apply; in fact, these origins are not part of our education. that is why it is important to return to our roots, learn from our ancestors, and recall and remember our history. the use of historic treatises as a source of knowledge is the first step.
6. references
[1] vitrubio: los diez libros de arquitectura, libro iii, cap. iii, translated by carmen andréu, ed. unión explosivos riotinto, madrid, 1972.
[2] león bautista alberti: los diez libros de la arquitectura, libro iii, cap. ii, translated by francisco lozano, alonso gómez, madrid, 1582.
[3] da casa, f., echeverría, e., celis, f., 2007: the intervention under soil level, the importance of its knowledge. the technique of the reinforced grouting. informes de la construcción, ed. instituto torroja csic, madrid, spain.
[4] da casa, f., echeverría, e., celis, f., 2002: “las particularidades específicas del recalce de cimentaciones en el patrimonio arquitectónico.” libro de actas del i congreso del geiic, conservación del patrimonio, ed. grupo español del international institute for conservation of historic and artistic works.
[5] da casa, f., echeverría, e., celis, f., chías, p., 2006: “las técnicas de intervención bajo rasante, la importancia de su conocimiento y difusión en la arquitectura.” ii jornadas de investigación de arquitectura y urbanismo iau06, ed. upc, barcelona.
[6] andrea palladio: los cuatro libros de arquitectura, 1524.
[7] cristóbal de rojas: teoría y práctica de fortificación, 1598.
[8] fray laurencio san nicolás: arte y uso de arquitectura, 1639.
[9] genaro brizguz y bru: escuela de arquitectura civil, 1738.
[10] cristiano rieger: elementos de toda la arquitectura, 1763.
[11] manuel fornes y gurrea: práctica del arte de edificar, 1841.
[12] p. celestino espinosa: construcciones de albañilería, 1859.
[13] ricardo marcos y bausá: manual del albañil, 1879.
[14] florencio ger y lobez: construcción civil, 1898.
[15] étienne barberot: tratado práctico de edificación, 1927.
surveying curriculum from the point of view of multidisciplinarity
václav slaboch, clge, vice president for professional education, vaclavs.vaclavs@seznam.cz
keywords: multidisciplinarity, modern curriculum for surveying education, role of clge, multilateral agreement on mutual recognition of qualifications, necessity of a cpd system for the surveying profession.
summary
multidisciplinarity and globalization blur the differences among professions, and surveying is no exception. clge has a name in two of the many european languages, english and french, namely “the council of european geodetic surveyors” and “comité de liaison des géomètres européens”. a “multilateral agreement” on mutual recognition of qualifications in surveying was signed in 2005 in brussels by representatives of germany, france, belgium, switzerland and luxembourg. in 2006 slovakia also joined this agreement. the signature by the czech republic is currently under discussion.
multidisciplinarity and globalization and their influence on surveying
multidisciplinarity in our profession is nothing new. a good example might be the famous italian surveyor and architect domenico martinelli (1650-1718), to whom a european research project financed from the erdf (european regional development fund) has been dedicated. martinelli was not a mere surveyor, but also a diplomat, judge, valuer, professor and architect; incidentally, such multidisciplinarity is typical even of present-day italian surveyors. martinelli's relations to czechia stem mainly from his diplomatic services for count kounic during the latter's mission as the emperor's ambassador in the hague in 1698. one of the most famous architectural projects of martinelli is the château at slavkov near brno (austerlitz). partners of this project are universität wien, the faculty of philosophy of masaryk university in brno, the senate of the czech republic, the liechtenstein museum vienna, the u.s. embassy in prague and the town of rousínov.
it might be worthwhile to consider launching a similar interregional project dedicated to the real estate cadastre, covering the countries of the former austro-hungarian cadastre (czechia, slovakia, hungary, austria, croatia, slovenia, italy). in fact this co-operation already exists, but it might be enlarged by france, belgium and maybe even denmark. why this enlargement, e.g. by france? because the czech society of surveyors and cartographers is one of the founding members of the fgf (fédération des géomètres francophones) and also because the so-called napoleonic cadastre is based on the same principles as the former austro-hungarian cadastre. the new definition of the functions of the surveyor as adopted by the fig general assembly in athens in may 2004 is a typical example of a multidisciplinary approach to our profession.
main points of the new definition of the surveying profession as adopted by fig and clge: a surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities:
• to determine, measure and represent land, three-dimensional objects, point-fields and trajectories;
• to assemble and interpret land and geographically related information;
• to use that information for the planning and efficient administration of the land, the sea and any structures thereon; and,
• to conduct research into the above practices and to develop them.
clge and its mission
clge has a name in two of the many european languages, english and french, namely “the council of european geodetic surveyors” and “comité de liaison des géomètres européens”. this has arisen because the business of the council is conducted in english, and the language of the european courts of justice is french. the accepted abbreviation in all languages is “clge”. clge was established at the fédération internationale des géomètres (f.i.g.) congress in wiesbaden in 1972 by the then 9 member states of the eec to consider the implementation of the treaty of rome in relation to the profession of geodetic surveying. clge represents in europe the geodetic surveyors of 23 states, which include 22 member states as well as norway and switzerland.
the purpose and objectives of clge
the purpose of clge is to represent and to promote the interests of the geodetic surveying profession in the private and public sector in europe, especially to the institutions of the european union. the aim is to enhance the development of the profession administratively, educationally and scientifically, to facilitate training, continuous professional development and mutual recognition, and to promote the activities of geodetic surveyors as a highly qualified profession.
liaison with the european commission
clge has nominated a liaison officer to open and maintain contact with the european commission. links have already been established with dg 12, 15 and 22.
the executive board consists of:
• the president, who is chairman of the general assembly and the executive board
• the vice president for professional education
• the vice president for professional practice
• the vice president for european relations
and, as permanent members:
• the secretary-general
• the treasurer
multilateral agreement on mutual recognition of professional qualifications of publicly appointed surveyors
on 24th october 2004, representatives of the surveying profession from switzerland, germany, france, belgium, austria, luxembourg and denmark signed the following multilateral agreement. in 2006 this agreement was also signed by slovakia.
preamble
the basis of all commercial and financial activity is confidence in the security of the legal system relating to land and property. national constitutions protect ownership of land and property and subject commercial and financial activity to strict procedures of a formal nature. in central european countries, comprehensive and legally binding documentation relating to ownership of land and property is traditionally assured by the technical-legal system of “the land register – real estate cadastre”. the keeping of ownership and mortgage registers as well as real estate cadastres is a matter for the state. codified procedures for registration and changes to entries therein must be compatible in all three areas in order to assure the necessary security. the presumed correctness of registers is reflected in the real estate cadastre in the presumption of correctness of the cadastre entries in terms of the location of the property boundaries and the designation of properties. furthermore, the real estate cadastre contains much more information which is essential for the functioning of the state, for the commercial and scientific utilisation of the surface of the earth, for the avoidance of hazards, for nature protection and for safety in planning and civil engineering. as changes in the registries of state and commercial interests arise, by their very nature, randomly in place and time, central european states have made use of the instrument of delegation of state functions for the past three hundred years. specially trained, reliable persons such as notaries public or publicly commissioned geometers have been selected for this function. the multiplicity of public and private laws and legal effects relating to land and property calls for impartiality, reliability and comprehensive technical and legal knowledge on the part of members of the profession. protection of ownership as a basis of an economy starts out very simply with confidence in the publicly commissioned geometer as an individual who, without regard to person, sets out the boundaries of the property and the rights accruing from the property and thus lays the foundation for the implementation of constitutional dictates for the protection of ownership of land and property. the complexity of land rights in modern economies calls for a thorough command of technical and legal knowledge relating to land and property on the part of the geometer commissioned with these duties in order to be able to carry out this function. in central europe, public registers covering ownership are quite diverse.
the complexity of regulations is increasing in modern economies because land as a resource is coming to play an ever more important role and is being circumscribed in diverse legal manners. the legal framework within which the commissioned geometer operates differs from one country to the next and continues to develop.
a profession for public functions
geometers commissioned to perform public functions are known in a number of countries: in france as géomètre-expert, in germany as öffentlich bestellter vermessungsingenieur, in belgium as géomètre-expert / landmeter-expert, in denmark as praktiserende landinspektører, in austria as ingenieurkonsulent für vermessungswesen, in switzerland as patentierter ingenieurgeometer / ingénieur géomètre breveté, in luxembourg as géomètre officiel. with this delegation of functions, states pursue the aim of opening up public functions to competition and reducing costs as well as improving the effectiveness of public registers in the economy. in this context, the appointment of a highly qualified member of the liberal professions is an advantage for citizens in their function as consumers, as they can select their service provider from the pool of competing, commissioned individuals, all of whom are on an equal footing. it has been proven historically that the organizational form of the liberal professions, under state supervision in terms of personnel and specialist knowledge, or in self-administration in conjunction with efficiency-based competition, is the form which discharges public functions for state and for commercial interests most efficiently in the long term. in the above-mentioned countries, there are roughly 4,500 offices with one or several members of the profession. professional rules govern public appointment in the respective countries; the professions are based on legislation governing the particular profession, which regulates the duties assumed, the entry requirements to the profession and its ethics. in addition, there are procedural laws which set out the professional scope of work. these relate to the areas of real estate, planning and civil engineering. the core of the profession is to be found in being commissioned to perform surveying work in the respective property securing system. moreover, the profession rests on the following pillars:
• cadastral survey and safeguarding property boundaries, privately commissioned
• documentation of property surveying (i.e. keeping registers, measures, computations and maps, publicly commissioned)
• national geodetic survey
• geomatics / geoinformation / topography / hydrography
• certification of facts relating to land and property
• work as technical expert reflecting the whole range of the professional formation
• application of laws relating to land and property
• real estate valuation
these are public functions, referred to as “sovereign” in some countries; as a rule, a member of the profession is permitted to take on assignments in the context of private law, if such do not give rise to a conflict of interest with the legal independence of the said member. commissioning a member of the profession to carry out public work (delegation) is done by government bodies or professional boards. in france, belgium and luxembourg, the géomètre-expert is the link from the property tax cadastre to ownership.
in the german federal states, members of the profession also issue administrative acts in their own right; in austria, they issue official documents; in switzerland, they carry out surveying and administer the comprehensive official swiss surveying activities. in all countries, consultation of the consumer concerning the limitations of the real estate property due to the legal contents and geometry of the boundaries is the first duty of the profession. in no country are the numbers of members of the profession restricted by government or self-administration; on the contrary, once members of the profession have provided proof of the necessary qualification, they have a right to admission. in that way, newcomers can always enter the profession. in all signatory states, there is a general trend towards an increase in delegating state functions to commissioned freelance geometers.
entry to the profession and its european dimensions
regulations relating to entry to the profession vary from one country to another; however, they are very similar and are essentially of the same nature. demonstration of attainment of the necessary qualification involves, apart from an academic course of study for surveyors (bac + 5u), the following general fields of study:
• administrative law,
• land law,
• building and planning law.
university education provides a widely available and comparatively equivalent, defined knowledge base. thus, it has not been problematic to date to perform straightforward surveying techniques in a cross-border context (for a non-commissioned geometer or surveying technician); this happens very often. the education set out below thus relates to practice and legal issues arising in the particular national legal system. there is, however, no possibility of exercising the public functions of a commissioned geometer on a cross-border basis. the regulatory situation on the one hand and the de facto impossibility of mastering two legal systems in sufficient professional depth have precluded this to date, not to speak of other formal obstacles. on the other hand, the right of public certification of facts is already used in cross-border service. nonetheless, the situation in the sector is in a constant state of flux; the legal systems are being investigated mutually and knowledge is disseminated transnationally over borders, not least by international associations such as geometer europas. in addition, the eu is moving slowly in the direction of a harmonization of laws, which also presupposes mutual understanding as a prerequisite. as this is of major importance economically, this initiative of the signatory states also has the objective of making knowledge of the practice of the profession more transparent in a legal and technical context and of drawing up common european positions in this sector. the larger this block of common european knowledge is, the sooner universities and training institutions can take account of it. this makes it easier for those entering the profession to avail themselves of the possibility of working in the european country of their choice, with all the well-known associated economic advantages. this likewise serves to achieve a stepwise harmonization of systems and their use to the benefit of the european population.
illustration of the hitherto existing prerequisites for admission
differences between countries having publicly commissioned geometers are, when seen in isolation, manifold and, in addition, vary between french- and german-speaking areas, but, in summary, they produce geometers who can function well within the respective legal system. for the individual countries, the following regulations apply (master; professional practice after bac + 5; exam; other; right to being admitted):
france: bac + 5u; 2 p; appraisal of work certificates: d.p.l.g. (in case of basic education at a state school); yes
germany: bac + 5u; 1 p after state examination; state examination; 2 r; yes
belgium: bac + 4u (in future: bac + 5u); 0 p (2 p); without “convention”
denmark: bac + 5u; 3 p; selection based on work certificates; yes
austria: bac + 5u; 3 p; external examination; yes
switzerland: bac + 5u; min. 18 months; external examination; yes
luxembourg: bac + 5u; 2 p (min. 6 months in cadastre administration); examination
legend:
bac + 5u: baccalauréat + 5-year study at a university / technical university
p: on-the-job training in law and work practice in the respective country after university study
r: compulsory monitored experience prior to the great state examination (two years)
in various countries, legal professional regulations also allow holders of a degree from a technical university to enter, if they can provide evidence of relevant technical and legal knowledge (and a longer practical training). this is regulated by national law.
current admission and examination institutions
the current examination and admission institutions are state institutions or legally established boards (examination authority / admission authority):
france: ministry of education (for dplg) / ordre des géomètres-experts
germany: principally the oberprüfungsamt (higher examination office) frankfurt / ministries of the federal laender
belgium: communautés (vl + w) / assermenté tribunal de 1ère instance, conseils fédéraux des géomètres-experts
denmark: supreme land surveying authority / supreme land surveying authority
austria: federal ministry of economics and labour / federal ministry of economics and labour
switzerland: swiss examination commission epig / executive federal council
luxembourg: examination commission appointed by the minister / ministry of finance
mutual recognition of qualifications for admission to the profession
the signatory states mutually recognize the qualifications for admission to the profession of a european geometer and agree on a procedural modus for ensuring unimpeded migration of members of the profession – irrespective of directive 89/48 – on the following basis: training as a graduate engineer (bac + 5) in surveying or a master (if compatible) is recognized automatically as the educational foundation. in addition, each candidate must acquire the necessary country-specific extra qualifications in the areas of:
• administrative law,
• land law,
• building and planning law.
qualifications for admission to the profession shall then be based on a common, general platform: diploma engineer / european master (bac + 5u) + 2 r/p + e.
treatment of migrants after obtaining qualification in a signatory state
the signatory associations support such migrants in order to provide them with the opportunity of evidencing the required knowledge in:
• administrative law,
• land law,
• building and planning law
of the respective country and of fulfilling the european platform requirements in terms of legal admission regulations. the admitting member states may evaluate the elements of the formation and experience of the migrant and give him the choice between a training seminar and an examination to prove his acceptability.
references
webliography and bibliography
1. internet home site of the clge – www.clge.eu
2. internet home site of the fig – the international federation of surveyors – www.fig.net
3. fig definition of the functions of the surveyor (http://www.fig.net/general/definition.htm), as adopted by the fig general assembly on 24 may 2004
4. eu interreg project domenico martinelli (1650-1718), italian surveyor and architect (http://www.domenico-martinelli.com/index_en.html)
5. zeměměřič, a czech monthly for surveying, cadastre and cartography: the new definition of the surveying profession – illustrated (http://www.geos.cz/resort/definice.htm)
6. internet home site of the vugtk – www.vugtk.cz – the research institute of geodesy, topography and cartography at zdiby, with the land surveying library, where the originals and the czech translations of the majority of the cited documents are now available
7. otmar schuster: multilateral agreement. proceedings from the international clge conference at the rm in brussels, 1-2 december 2005. vúgtk, volume 52, publication no. 39, isbn 80-85881-14-4
8. stig enemark, tom kennie: cpd – continuing professional development and its future promotion within fig. fig publication no. 15
9. vaclav slaboch, frantisek krpata, josef weigel: surveying education in the czech republic
10. otmar schuster: surveying market in europe. clge publication, 2004
11. vaclav slaboch, jean-yves pirlot (eds.): european professional qualifications in geodetic surveying, proceedings of the clge international conference 2005, brussels, belgium, 1-2 december 2005, vúgtk, volume 52, publication no. 39, isbn 80-85881-14-4
12. vaclav slaboch: impacts and challenges of the eu membership on the surveying profession in the czech republic. in: 1st international trade fair of geodesy, cartography, navigation and geoinformatics, geos 2006, conference proceedings, milan talich (ed.), vúgtk, volume 52, publication no. 40, isbn 80-85881-25-x
database for tropospheric product evaluations – implementation aspects
jan douša, gabriel győri, research institute of geodesy, topography and cartography, geodetic observatory pecný, ústecká 98, zdiby 250 66, jan.dousa@pecny.cz
abstract
the high-performance gop tropo database for evaluating tropospheric products has been developed at the geodetic observatory pecný.
the paper describes initial database structure and aimed functionality. special focus was given to the optimizing effort in order to handle billions of records. evaluation examples demonstrate its current functionality, but future extensions and developments are outlined too. keywords: troposphere, zenith delays, database, gnss, radiosonde, meteorological data 1. introduction the potential of gnss observations for troposphere monitoring has been described in bevis et al. (1992). since that time various projects aimed for developing gnss-meteorology in europe. first benchmark of near real-time ground-based gnss tropospheric products – zenith total delays (ztd) – was provided within the cost action 716 – exploitation of ground-based gnss for meteorology and climatology (1999-2004). the extended routine analyses were supported by the eu fp5 project – targeting optimal use of gnss for humidity (tough, 2003-2006). recently, the e-gvap i-iii (2006-2016) aimed for the establishing operational ztd estimations and their assimilations in numerical weather models (nwm) and for developing active quality control for gnss products. additional effort on enhanced capability of gnss troposphere monitoring and the exploitation of nwm data for gnss positioning has been recently prepared and approved within the cost action es1206 – gnss for severe weather events and climate (gnss4swec, 2013-2017). the geodetic observatory pecný (gop) analysis centre has contributed to the above projects since 2000 and provided one of the first operational gps tropospheric products near real-time regional gps solution available officially since 2001 (douša, 2001). additional tropospheric products in support of meteorology have been developed at gop during recent years – near real-time regional multi-gnss product, 2011-present (douša, 2012a), first near real-time global product, 2010-present (douša 2012b) and real-time ztd product, 2012-present (douša et al., 2013). gop routine post-processing tropospheric solution has been available also since 1996 for the part of the euref permanent network (epn). the complete european network was reprocessed at gop recently for the entire period 1996-2012. as being the most accurate and homogeneous during the whole interval, the reprocessing could be used in regional studies for climatology. the regular and long-term evaluation of all these products is an important task for both getting a relevant feedback about the accuracy and studying potential for improvements. initial comparison of gop tropospheric products was done occasionally (douša, 2003) using geoinformatics fce ctu 10, 2013 39 douša j., győri, g.: database for tropospheric product evaluation perl scripts for data stored in internal text format. with increased data period such design was recognized as inconvenient and it was replaced by a simple mysql database used during 2002-2010 for gop gnss-based zenith total delays routine evaluations. recently, more flexible database structure was requested for fully automating tropospheric data comparisons including the searching of nearby points, filtering, converting, interpolating, generating various statistics and their extracting for web-based plots. new database (labelled as ’gop tropo database’) was required to provide a high-performance system in order to deal with billions of data records. 
as a free alternative to the enterprise solutions, the postgresql server (posgresql) was selected for this task and the database structure was completely revised in order to support its flexible utilization. the gop tropo database structure and functionality design is described in section 2, the optimization aspects in section 3 and initial comparison examples in section 4. the conclusion and outlook is given in the last section. 2. database design this section provides a rough description of the data structure with the focus on specific details. the data organization is the most important aspect of any database since it predefines any future utilization (and its flexibility for extensions). the main gop tropo database features and functionalities were initially defined as well as requirements for future developments to: • accommodate and compare different tropospheric data types in a single database, • provide geo-referencing and collocated point searching for data comparisons and interpolations, • apply vertical corrections (potentially supported with geoid and orography models), • generate comparison differences and statistics in a yearly, monthly, weekly and hourly mode, • support conversion data types (e.g. zenith wet delay to integrated water vapour and others in future), • interpolate values from grid points and calculate grid points from available data (in future), • study trends, temporal/spatial variations, correlations (in future). target and potential utilizations are foreseen such as a) to compare (near) real-time gnss tropospheric products with respect to the final ztd products, b) to assess gnss results with respect to radio soundings, radiometers or numerical weather models and other independent datasets, c) to compare troposphere estimates from different space geodetic techniques (gnss, vlbi, doris), d) to evaluate different strategies of interpolations or kriging of meteorological data or tropospheric delays, e) to evaluate in-situ meteorological observations provided in mrinex, f) to evaluate global pressure, temperature or specific tropospheric models in future. 2.1. database structure in any relational database the user data are placed in database tables, which structure is designed optimally for the purpose of the utilization. the data are organized as records (repgeoinformatics fce ctu 10, 2013 40 douša j., győri, g.: database for tropospheric product evaluation figure 1: basic structure of gop tropo database including original data representation. resented by table rows) while each record has specific values (represented by table columns). the data should not be duplicated and, when any duplication occurs, new database table with relevant records is created to unify any dualities. such unique record is then related to the original table record by using the relation provided with a reference key (that’s why such database is called relational). the gop tropo database is designed to accommodate different tropospheric meteo data types such as: gnss, vlbi, doris, radiosondes, radiometers, synoptic data, in-situ meteorological data, data extracted from the nwm and other supporting data/models as e.g. geoid, orography. according to the variability of tropospheric products, meteorological data or other supporting data, it would not be easy and efficient to accommodate them within a single data table, because each data source has its own specific stored values. 
fortunately, data from each source could be processed independently and it was identified as useful to define a specific data table for each data type. different data sources for all data types are also considered, e.g. such as the gnss ztds from euref, e-gvap analysis centres, the international gnss service and many others. additionally, for a single analysis centre, e.g. gop, various products are available too, such as final, reprocessed, near real-time (global, gps, gps+glonass), real-time or others. all the sources within a single data type are accommodated within a single table providing unique source identification for the data. the common structure of the data organization within the database is shown in fig. 1. all data are georeferenced according to the reference key to the tpoint table (’t’ is always used to identify the table name), which provides additional information about the data source (tsource), site identification (tsite) and point location. optionally geoid undulation and orography height is provided too. each record in the tpoint table is uniquely defined by its name, source and position (latitude, longitude, height) along with the position accuracy used geoinformatics fce ctu 10, 2013 41 douša j., győri, g.: database for tropospheric product evaluation table 1: list of existing input filters (perl) for data decoding and inserting in database input filter input format procedure remarks tro-snx2db.pl tropo-sinex finsertgnss ztds, igs/euref products bsw-trp2db.pl bernese trp finsertgnss ztds, gop products cost-trp2db.pl cost-716 finsertgnss ztds, e-gvap rt-flt2db.pl tefnut output finsertgnss ztds, gop real-time analysis raobs2db.pl badc profiles finsertraobs integrated data, radiosondes wvr2db.pl radiometrics finsertwvr integrated data, radiometers met-rnx2db.pl meteo rinex finsertinsit in-situ meteo data (gnss) cost-met2db.pl cost meteo finsertsynop cost 716 meteorological data in identification of a unique point. specific data tables are currently predefined for the data types – tgnss, tvlbi, tdoris, traobs, twvr, tinsit, tuser, tgeoid, tsurf and others where the name can help to identify the data content. others specific data could be completed later, such as for mapping functions coefficients, slant tropospheric delays, synoptic data, nwp grid data (or more likely their reduction to the specific parameters at a reference surface) and other. 2.2. database feeding, record uniqueness the database filling is done in three steps: 1) data download, 2) decoding and converting original data including data preparation for sql command calling a specific database insert procedure and 3) executing sql command within gop tropo database. input data are collected from various sources via the standard ftp or http downloads to a local disk, all in original and usually compressed formats. this process is controlled via unix cron job scheduler. the data decoding and filtering is done by in-house developed input filters written in the perl scripting langue. their main tasks consist of the extracting and converting of data from text files and preparing (and executing) sql commands calling the gop tropo database insert procedure. the input filters are designed for various input formats (e.g. for tgnss it is tropo-sinex format, bernese ’trp’ output, cost-716 format) and also specific database insert procedure is called for each data types (e.g. tgnss uses finsertgnss procedure). the list of supporting formats and input filters is given in tab. 1. 
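to make the three feeding steps more tangible, the following sketch pushes a single tropo-sinex file through the tro-snx2db.pl filter from tab. 1. it is only an illustration under assumptions: the paper does not specify the filter's command-line options, the product url, the file name or whether the filter executes the sql itself, so the psql pipe and all names below are placeholders.

#!/bin/bash
# schematic feeding run for one TROPO-SINEX file (typically started from cron);
# the filter name and the finsertgnss procedure come from tab. 1, all paths,
# file names and options are hypothetical.

SRC=ftp://products.example.org/troposphere     # hypothetical product archive
WORK=/data/gop-tropo/incoming

# 1) download the original (compressed) file
wget -q -P "$WORK" "$SRC/gope0010.13zpd.gz"
gunzip -f "$WORK/gope0010.13zpd.gz"

# 2) decode/convert and prepare SQL statements calling the insert procedure,
# 3) execute them in the gop tropo database (here piped to psql)
tro-snx2db.pl "$WORK/gope0010.13zpd" | psql -q -d gop_tropo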
the radio sounding profile is the example of data type, content of which is not fully included into the database, but only selected parameters derived at the surface (e.g. pressure, temperature, water vapour pressure, zenith hydrostatic delay, zenith wet delay, relevant lapse rates for possible vertical conversions etc.). the data decoding and database filling is also regularly started from a cron scheduler. the internal database stored procedure for inserting is performed either via insert or update sql command. the former command supports direct inclusion, but works for new records only. the latter consists of an initial check if the record already exists in the database and, if true, it replaces it with the current data. in this context it is important to understand how records are identified as unique within data tables. database systems use geoinformatics fce ctu 10, 2013 42 douša j., győri, g.: database for tropospheric product evaluation primary keys to specify the column (or columns) that uniquely identify each record, and these can be either natural or surrogate. a natural key is the one that is composed of columns directly related to the data, while surrogate key is a specific column added to the data table only for serving as a primary key (e.g. a unique identifier, auto-incremented numerical value). in many cases we use surrogate keys, but not for the main data tables, where the primary key is designed as a unique index over two columns – epoch and tpoint. the epoch value is of the type ’timestamp without timezone’ handling commonly data and time. the point column refers to the record in the tpoint table, where the uniqueness is provided via auto-incremented numerical value. the record in the tpoint table is checked and updated automatically anytime when filling new entry into main data table. this is proceeded as follows – requested point is searched within the tpoint table and, if found, it returns the reference key and, if not, the new point is created. the uniqueness of the point records is thus managed within the point searching/inserting internal procedure (finsertpoint) taking into account the unique point characteristics: siteid, source, pointtype and location (latitude, longitude, height) within predefined point accuracy (horizontal and vertical). figure 2: scheme of the comparison within gop tropo database 2.3. data and product comparison the main functionality of the current gop tropo database implementation is the comparison of various tropospheric data and products. it consists of this sequence of steps (fig. 2): • comparison configuration (manual), • search of collocated points (with respect to the setting criteria), • generation of data differences for identified pairs, • statistics over data differences, • extraction and visualization (provided outside the database). the user configures data or product comparisons by defining two data sources from one or two data tables. the setting consists of criteria for searching pairs of collocated points – the limit for the horizontal and vertical distance of two points. if applicable, the maximum σ is set for the initial data filtering and the limit of the confidence interval is provided for the statistical procedures to detect and reject outliers. optionally, mask or explicit site list can be provided if selected stations should be compared only (implicitly all stations). individual comparison settings are stored as records in the special database table (tpairconf ) where a surrogate key is set for further referencing. 
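a minimal ddl sketch may help to fix the point-referencing and record-uniqueness scheme of section 2.2 above; it is illustrative only, column names beyond those explicitly mentioned in the text (epoch, the point reference, ztd, the formal error used for filtering, coordinates) are assumptions, and the real tables hold more attributes.

-- illustrative sketch, not the actual gop tropo schema
CREATE TABLE tsource (id serial PRIMARY KEY, name text UNIQUE);
CREATE TABLE tsite   (id serial PRIMARY KEY, siteid text);

CREATE TABLE tpoint (
    id        serial PRIMARY KEY,               -- surrogate key (auto-incremented)
    fk_site   integer REFERENCES tsite(id),     -- site identification
    fk_source integer REFERENCES tsource(id),   -- data source / product
    pointtype text,
    latitude  double precision,
    longitude double precision,
    height    double precision
);

-- main data table: natural primary key over epoch + point (section 2.2)
CREATE TABLE tgnss (
    epoch    timestamp without time zone NOT NULL,
    fk_point integer NOT NULL REFERENCES tpoint(id) ON DELETE CASCADE,
    ztd      real,                              -- zenith total delay
    sigma    real,                              -- assumed name of the formal error column
    PRIMARY KEY (epoch, fk_point)
);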
geoinformatics fce ctu 10, 2013 43 douša j., győri, g.: database for tropospheric product evaluation the candidates for identifying collocated points are searched within the tpoint table for the two specified sources from the setting. the station pairs are generated by the tgeneratepair procedure and the pair list is stored in the tpair table used afterwards for data differences generation (fgeneratepairdiff ) within the period of request. compared products have usually different sampling rates and the values for comparing could be generally referred to different epochs. for this case, the database supports sampling rate argument defining the interval within which all single product values are extracted and the mean value is calculated for the comparison. an optimum difference sampling step should be set up as the higher data sampling of both products in order to grab common values and, on the other hand, below one hour since the variation of the troposphere will be significantly smoothed by averaging product values over a longer time. the setup between 10 and 60 minutes is usually reasonable. in future, we consider supporting other functional fitting of the single product values for differencing instead of a simple averaging. generated data differences are always stored in the tpairdiff table enabling to accommodate differences from recently collected data or analyzed products. the statistical procedure (fgeneratepairstat) applies three iterations to estimate biases, standard deviations and root mean squares (r.m.s.) for each compared pair individually. the first iteration serves for the median estimation as a robust initial mean value. it is used in the second step for calculating differences reduced by mean value in order to estimate reliable standard deviation for the outlier detection. in the third step, final statistics – bias, standard deviation, r.m.s., number of all observation and outliers – are calculated after outliers were rejected using the confidence interval based on standard deviations from the second iteration and, optionally, data excluding r.m.s. limit from the setting. for individual comparisons, five statistic modes are provided for a period of request (defined by ’beg’ and ’end’ arguments) – all, yearly, monthly, weekly and hourly. the first one calculates the statistics over the requested period, the last for the same period but providing statistics for data filtered according to hour of day. all other modes calculate statistics individually for windows as specified within the whole period. because the differences are saved in the tpairdiff table, all statistic modes can be efficiently repeated in a regular update for generating time-series as demonstrated in fig. 3. finally, the extractions and visualizations of results are described in the section with sample evaluations. table 2 summarizes the comparison procedures. note that each comparison procedure uses a key to the configuration record in the tpairconf as its first argument which has been omitted in the table. table 2: list of comparison procedures/functions (’f’) input/output tables (’t’) procedure arguments input output remarks fgeneratepair tpairconf tpair generate pairs fgeneratepairdiff ’beg’,’end’,’sample’ tpair,tdata tpairdiff generate differences fgeneratepairstat ’beg’,’end’,’type’ tpairstat generate statistics 2.4. 
other procedures for data maintenance since data are structured using various database tables, any of the removing, viewing, extracting and statistic operations require more complex sql commands, which are implemented as specific stored procedures in gop tropo database. according to the relationships between geoinformatics fce ctu 10, 2013 44 douša j., győri, g.: database for tropospheric product evaluation table 3: list of other procedures for database content maintenance procedure arguments category remarks vpoint ’source’,’site’,’description’ view list filtered point list vsource ’source’ view list filtered source list vsitel ’source’,’site’,’description’ view list filtered site list vdata ’source’,’site’,’description’ view list filtered data vdiff ’source’,’site’,’description’ view list filtered differences fdatainfo info information on data table fpairinfo info information on pair tables fstatinfo info information on statistics tables fdeletepoint ’source’,’site’,’description’ delete remove selected data and site fdeletesite delete remove unreferenced sites fdeletepair ’source’,’site’,’description’ delete remove pairs and differences records in various data tables, such procedures combine data for a transparent and userfriendly output. additionally, for easy data selection, three common arguments for name or mask input (via a simple regular expression using asterix) are supported for most common maintenance procedures. table 3 provides an incomplete list of maintenance procedures with respect to three category operations: view, delete or info. while view procedures are designed to extract specific data combinations from various tables, the info procedures extract general information about table contents – e.g. start/end of data, number of records etc. all these procedures are, however, only minimum implementations to simplify common queries while any other specific query can still be requested using a standard sql command. on the contrary, delete procedures could be very tricky, in particular due to a possibility to lose a consistency within tables and their relations. any removal is implicitly driven by the ’delete cascade’ attribute used for most tables. the attribute specifies that when referenced row is deleted, row(s) referencing it should be automatically deleted as well. however, the cleaning of any specific point-/pair-related data or differences should be done with the relevant delete procedure. for cleaning sites in the tsite table which are not anymore referenced a specific stored procedure exists too. table 4: list of selected important settings from postgresql.conf configuration file name of variable value shared_buffer 3000 mb work_mem 512 mb max_connections 10 maintenance_work_mem 256 mb default_statistic_target 300 effective_cache_size 5000 mb constraint_exclusion partition geoinformatics fce ctu 10, 2013 45 douša j., győri, g.: database for tropospheric product evaluation 3. database optimization the database is running on a dedicated server with gnu linux operating system debian 6.0.7. and, currently, has reserved 12gb memory and 8 thread 64-bit intel(r) xeon(r) cpu. this hardware configuration is sufficient for current operations. however, more than 750 millions of records are stored in each of the largest tables – tgnss and traobs – and there are more than one billion records total in the database, utilizing almost 100gb of hard drive space. this amount of data can cause lack of performance and rapidly decrease execution speed of some sql queries. 
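expressed as configuration entries, the values collected in tab. 4 above would appear in postgresql.conf roughly as follows (note that the file itself spells the parameters shared_buffers and default_statistics_target); the meaning of the individual parameters is discussed in the next paragraphs.

# excerpt from postgresql.conf (postgresql 8.4), values as in tab. 4
shared_buffers            = 3000MB
work_mem                  = 512MB
max_connections           = 10
maintenance_work_mem      = 256MB
default_statistics_target = 300
effective_cache_size      = 5000MB
constraint_exclusion      = partition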
the highest performance can be reached by optimal configuration of several parameters which are stored in postgresql.conf settings file, because they are set to extremely low values by default and, in most cases, they are not related to the available hardware configuration. for that reason, we discuss recommended and applied settings in the next paragraph. the first important parameter – work_mem – defines the maximum limit of memory which can be used for one sorting operation. the amount of memory usage increases with each additional sorting. each client connected to the same database server typically uses a maximum of two sorting operation at one time. this implies that the value of work_mem could be set to the amount of unused memory divided by the maximum number of connections while divided by two. the number of connections can be reduced in max_connections setting. when sorting, postgresql estimates first the amount of memory for possible use. if work_mem value is not high enough, the system will use swap operations along with free space on the hard-drive, decreasing the performance. the parameter shared_buffer then determines the maximum amount of memory which can be used for cached data in memory after they are read from the hard drive. the higher value, the bigger the set of data is possible to store in memory, which reduces the number of swap operations. optimal value is quite difficult to set, but 30% of the available memory is recommended for a dedicated server. the vacuum operation is one of the most important commands in postgresql database server. it is a kind of garbage collector which releases allocated space from invalid records. the vacuum operation is closely related to the analyze operation which is used to collect statistics for optimizing a server query plan. the automatic maintenance of database is provided by the autovacuum command, which is turned on by default. the maintenance_work_mem parameter in the setting then defines the amount of memory which can be used for autovacuum. the last important parameter is default_statistic_target, value of which determines the quantity of data used for statistics. the higher the value, the higher the cpu utilization generated by server. on the other hand, a higher value of default_statistic_target likely yields more precise statistics and thus possibly increase the speed of the next sql query. table 4 summarizes the important setting of the postgresql configuration file. as it has been already mentioned above, the biggest table in gop tropo database contains more than half a billion of rows, which can increase easily when new data are introduced (e.g. such as from e-gvap project). the sql queries usually work with data in specific time range, for example, generating monthly statistics between different products. the postgresql supports basic table partitioning while splitting single large table into several smaller pieces. this is done by applying an inherited scheme, which defines a parent table similarly as the original table, while data are stored in a sequence of child tables (partitions). such geoinformatics fce ctu 10, 2013 46 douša j., győri, g.: database for tropospheric product evaluation partitioning can rapidly improve system performance, in particular when the query works with data in a single partition. on the other hand, additional overheads are relevant to all ’insert’ commands calling a special trigger function when the table is partitioned. 
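a minimal sketch of the yearly range partitioning in the inheritance style of postgresql 8.x follows; the actual trigger functions used in the gop tropo database are not published, so the routing function below is a simplified stand-in that only covers two example years and assumes the tgnss column names used elsewhere in this paper.

-- two yearly child tables inheriting from the (empty) parent table
CREATE TABLE tgnss_y2007 (
    CHECK (epoch >= '2007-01-01' AND epoch < '2008-01-01')
) INHERITS (tgnss);

CREATE TABLE tgnss_y2008 (
    CHECK (epoch >= '2008-01-01' AND epoch < '2009-01-01')
) INHERITS (tgnss);

-- trigger function routing each inserted row into the child table for its year
CREATE OR REPLACE FUNCTION tgnss_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.epoch >= '2007-01-01' AND NEW.epoch < '2008-01-01' THEN
        INSERT INTO tgnss_y2007 VALUES (NEW.*);
    ELSIF NEW.epoch >= '2008-01-01' AND NEW.epoch < '2009-01-01' THEN
        INSERT INTO tgnss_y2008 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for epoch %', NEW.epoch;
    END IF;
    RETURN NULL;   -- the row is stored in the child table, not in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tgnss_partition
    BEFORE INSERT ON tgnss
    FOR EACH ROW EXECUTE PROCEDURE tgnss_insert_trigger();

with constraint_exclusion set to partition (tab. 4), a query restricted by epoch, such as the explain analyze example below, only touches the child table whose check constraint matches.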
the inherited implementation guarantees that the partitioning has no direct influence to scripts or applications used within the database since data stored in child tables inherits behaviour from the empty parent table. in gop tropo database, range partitioning was applied in a yearly scheme on all large tables and each tgnss child table thus contains about 20 millions records on average. new partitions are created by specific trigger pl/sql functions called when data are inserted in the table. table 5 summarizes the execution time of computing average value from data stored in a single partition restricted by where clause. it is obvious that such query is much faster on the partitioned table than on the table without partitioning. explain analyze select avg(ztd) from tgnss as t1 where t1.epoch >= ’2007-01-01 00:00:00’ and t1.epoch < ’2008-01-01 00:01:00’ and t1.fk_point = xx; table 5: the statistics from the partitioning test via analyze function for specific (repeated) command sequence station source execution time execution time number (partitioned table) (single table) first gope euref repro1 1363 [ms] 1300 [ms] second gope euref repro1 13 48 third gope euref repro1 13 48 first albh igs repro1 47 125576 second albh igs repro1 0.1 4422 third albh igs repro1 0.1 4422 the database performance was tested on several different clusters as defined in tab. 6. the original cluster is labelled as a and to this variant additional optimization steps were applied sequentially. in the first step the original file system (ext3) was replaced with newer file system (ext4) to improve swap operations (cluster b). the postgresql setting was then revised according to that described above (cluster c). the tgnss table was divided into yearly partitions (cluster d). monthly partitioning produced a lot of tables and this variant was rejected from the comparisons. the tpairdiff table was also partitioned applying the yearly scheme (cluster e). the comparison averaging interval was decreased from 60 to 10 minutes (cluster f). the postgresql was updated from 8.4 to recently newest 9.2 version (cluster g) with applying original settings and, finally, the settings was tuned also in postgresql 9.2 in a similar way as for the version 8.4 (cluster h). table 6 summarizes the settings and performance of tested clusters. the statistics clearly show the importance of postgresql configuration tuning, which improved overall performance by a factor of 2-4, in particular for all highly time-consuming procedures (e.g. inserting). that is true for the old as well as the new system version. the new file system also did not provide any improvements as expected, but even a slight degradation. in this general performance test, partitioning did not cause an increased performance when sets of procedures applying more complex sql queries were used. this was not expected, but in general we decided to geoinformatics fce ctu 10, 2013 47 douša j., győri, g.: database for tropospheric product evaluation keep partitioning since only very small degradations and overhead costs were found. further benefit can be expected for new postgresql releases as well as in with growth of the data in database. the latter was the primary argument for preserving partitioned tables for large datasets. 
table 6: variant settings of the database optimization and their performance

cluster | file system | partition data/diff | postgresql vers/tuning | compared samples | insert [s] | total [s] | diff [s]
a | ext3 | 1/1 | 8.4 / no | 60 min | 4712 | 1130 | 835
b | ext4 | 1/1 | 8.4 / no | 60 min | 5050 | 1361 | 864
c | ext4 | 1/1 | 8.4 / yes | 60 min | 1348 | 944 | 660
d | ext4 | yearly/1 | 8.4 / yes | 60 min | 1663 | 1324 | 1033
e | ext4 | yearly/yearly | 8.4 / yes | 60 min | 1653 | 1377 | 1079
f | ext4 | yearly/yearly | 8.4 / yes | 10 min | 1637 | 7855 | 7563
g | ext4 | yearly/yearly | 9.2 / no | 60 min | 5759 | 1568 | 1228
h | ext4 | yearly/yearly | 9.2 / yes | 60 min | 1739 | 1554 | 1207

4. evaluation examples

in order to demonstrate the initial database functionality, we provide several samples from routine evaluations with a focus on gnss tropospheric product comparisons. the following products were used in the demonstration figures:
• igs operational and re-processed (repro1) tropospheric products (byram, 2012),
• euref combined tropospheric results from the operational and re-processed (repro1) solutions (soehne and weber, 2005),
• gop global near real-time tropospheric product (douša 2012b),
• gop near real-time gps and multi-gnss (gps+glonass) tropospheric products (douša, 2012a).

we do not intend to go into details in the examples and thus do not discuss product differences, e.g. the applied software, processing strategy, models, constraints, precise products and many other factors affecting the ztd solution. it is out of the scope of this paper to study the effects of the various changes clearly visible in the statistics. the sample comparisons demonstrate the calculated biases and standard deviations, either for the entire period or within a period split into regular discrete intervals (e.g. years, months, weeks). the statistic results are provided by the database procedures for predefined configurations, and the results are extracted from the database and visualized with the gnuplot (williams and kelley, 2011) or gmt (wessel and smith, 1998) plotting tools. various figures are generated – total bias and standard deviations for the whole period and all common stations, the geographical distribution of the values, or time-series of mean statistics over all stations from the comparison. the samples are given in figs. 3-6.

figure 3: example total bias and standard deviations for gop gps and gnss ztd near real-time solutions with respect to the euref combined ztd solution (three months in 2011). the circles indicate multi-gnss stations.

figure 4: weekly mean and its r.m.s. of ztd biases and standard deviations from all stations.

figure 3 shows the assessment of the gop near real-time multi-gnss ztd solution with respect to the post-processed euref combined ztd product over three months in 2011. for each station which was identified as common to both products, the bias and standard deviation are calculated and plotted. such a plot provides information about the internal accuracy of gnss ztd products on a station-by-station basis. this is useful to assess the consistency of various strategies (and models) for comparison pairs when different product timeliness, input products, gps or multi-gnss observations and others are used. figures 4 and 5 show time-series of a long-term comparison of two products on weekly and monthly intervals, respectively. the biases and standard deviations were calculated as mean values over all common stations for each interval individually so that any effect common to all stations can be visualized.
additionally, r.m.s. of such mean is calculated and plotted. figure 4 compares historical and homogeneously reprocessed igs ztd products. the evolution of models and strategies within the historical product are clearly seen when compared to the re-processed ztds using the same strategy and models over entire period. figure 5 compares two different re-processing products – igs (global) and euref (regional) while common stations in europe could be compared only. although both products are geoinformatics fce ctu 10, 2013 49 douša j., győri, g.: database for tropospheric product evaluation figure 5: monthly mean and its r.m.s. of ztd biases and standard deviations from all stations consistent over all time, the standard deviation clearly shows improvement in time, which can be attributed to the steadily increasing quality of data and products when more permanent stations in global and european networks are involved. finally, figure 6 shows a comparison of igs final and igs repro 1 ztd values for all dates covered by repro 1. (whereas fig. 4 compared igs final and igs repro 1 ztd estimates according to date, fig. 6 compares them according to site.) the ztd standard deviation is typically lower (about 2 mm) at low north hemisphere as well as in low latitudes in general. the effect of isolated stations, e.g. in central asia, africa and oceans, could be easily identified from the figure with standard deviations up to 4-5 mm. this can be attributed to the effect figure 6: displays geographical representation of ztd standard deviations from two igs global solutions geoinformatics fce ctu 10, 2013 50 douša j., győri, g.: database for tropospheric product evaluation of lower accuracy of precise global orbit and clock products due to the lack of contributing stations, in particular during 90th. 5. summary and outlook the developments of the highly performance gop tropo database for the evaluation of tropospheric data and products were described with a special focus on its implementation aspects. the structure was designed in a flexible way to fulfil requirements specified in the introduction. although the initial and primary motivation aimed for routine comparisons and evaluation of gnss tropospheric products, current implementation functionality already goes beyond this scope. a special effort was given to the database optimization to support billions of records, which is already easy to achieve with several products. the optimization shows the need for revision of postgresql default settings which could improve the overall performance by a factor of 2-4. although careful partitioning did not improve performance, it was decided to keep it for the future since it is expected to become very important in handling huge quantities of data. finally, samples of results were provided in order to demonstrate currently implemented functionality on selected interesting examples. acknowledgements the database development was supported by the czech science foundation (no. p209/12/2207). the authors also thank to dr. christine hackman from u.s. naval observatory and two anonymous reviewers for useful comments improving the manuscript. references [1] byram, s., hackman, c. and tracey, j. 2012. computation of a high-precision gpsbased troposphere product by the usno. proceedings of the 24th international technical meeting of the satellite division of the institute of navigation (ion gnss 2011). 2012, portland, or, september 2011, pp. 572-578. [2] douša, j. 
(2001): towards an operational near-real time precipitable water vapor estimation, physics and chemistry of the earth, part a, 26/3, pp. 189-194. [3] douša, j. (2003): evaluation of tropospheric parameters estimated in various routine analyses, physics and chemistry of the earth, 29/2-3, pp. 167-175. [4] douša, j. and g.v. bennitt (2012): estimation and evaluation of hourly updated global gps zenith total delays over ten months, gps solutions, springer, online-first. [5] douša, j. (2012): development of the glonass ultra-rapid orbit determination at geodetic observatory pecný, in: geodesy of planet earth, s. kenyon, m.c. pacino, u. marti (eds.), international association of geodesy symposia, vol 136, pp.1029-1036. [6] douša, j., válavovic, p. and gyori, g. 2013. development of real-time gnss ztd products. presentation at the egu 2013 general assembly, april 7-12, 2013. [7] williams, t. and c. kelley (2011). gnuplot 4.5: an interactive plotting program. url: http://www.gnuplot.info. [8] postgresql, http://www.postgresql.org/docs/8.4/static/reference.html. geoinformatics fce ctu 10, 2013 51 http://www.gnuplot.info http://www.postgresql.org/docs/8.4/static/reference.html douša j., győri, g.: database for tropospheric product evaluation [9] soehne, w. and g. weber (2005): status report of the epn special project “troposphere parameter estimation”, euref publication no. 15, mitteilungen des bundesamtes fuer kartographie und geodaesie, vienna, austria, band 38, pp. 79-82. [10] wessel, p. and w.h.f. smith (1998): new improved version of the generic mapping tools released, eos tans. agu, 79, 579. geoinformatics fce ctu 10, 2013 52 on scripting grass gis: building location-independent command line tools peter loewe rapideye ag, molkenmarkt 30, 14476 brandenburg an der havel, germany loewe@rapideye.de keywords: grass gis, grass scripting, foss gis workflows, embedded gis, bash, python abstract this paper discusses scripting techniques within the context of grass gis. after an overview over scripting for interactive grass sessions, it is shown how grass gis-provided functionality can be used for external applications. this approach of external scripting allows for the application of grass gis-based functionality to be used for standalone applications and embedding in larger automated workflows. on scripting what’s in a grass script scripting allows to to re-use workflows which have been previously developed in interactive sessions of a user (expert) and a gis by automatization. the geographic resources analysis support system (grass) gis, a free and open source (foss) application consists of a variety of independent software modules, each of them providing a unique gis processing capability. during an interactive grass session, the grass modules are applied to the designated set of geodata (”grass location”) by the user (expert). so a script in grass gis is a control structure which orchestrates the execution of underlying grass gis modules. the control structure is wrapped around the grass modules and accesses them via their defined interfaces. examples are provided in [1] and [2]. scripts are an integral part of grass gis: a significant amount of the available grass gis functionality is actually provided by scripts, instead of modules consisting of compiled binary code. the number of grass modules which are scripts can be checked by examining the ”scripts” directory of a grass gis distribution. 
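for example, inside a grass session (where the environment variable GISBASE points to the installation directory, as in the code snippets later in this paper) the script modules can be listed directly; a small illustrative check:

# modules implemented as scripts live in the "scripts" subdirectory
ls "$GISBASE/scripts" | wc -l          # how many modules are scripts
ls "$GISBASE/scripts" | grep mremove   # e.g. the g.mremove script discussed below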
an overview over currently available grass scripts beyond the scope of the standard grass gis distribution is available in [3]. geinformatics fce ctu 2009 55 loewe p.: on scripting grass gis: building location-independent command line tools new grass functionality can most effectively be implemented on sourcecode level, using the grass libraries and c-api. however, this approach requires a good understanding of the internals of grass gis. by comparison, scripting allows for fast prototyping and does not require source-code-level knowledge. the downside is a generally slower execution speed than compiled c-code. kinds of grass scripts on a high level, grass scripts fall in two general categories: module-scripts, like those included in every grass gis distribution, provide additional gis functionality to the user by combining multiple pre-existing grass modules. the other category are workflow-scripts. these provide for bulk tasks like the repeated use of processing chains (i.e. ”apply the same computation on [1 .. n] data layers”). one exception to the rule is the g.mremove script [4], which falls in both categories: being part of the standard grass distribution makes it a module script. but it is also a workflowscript, as it allows for bulk removal of data layers, instead of the repeated use of the module g.remove. languages for scripting while unix-shells and perl have been previously the most widely used programming languages for scripting, python has been recently adopted as the preferred object-oriented scripting language by the grass community: starting with grass 7.0, all scripts included in the standard grass gis distributions are coded in python. add-on scripts provided by the user community continue to be formulated in various programming languages [3]. for the examples given in this paper, the bourne again shell (bash) is still used. the code can be easily translated into python commands. the joy of boilerplating the effort to create a script is only justified by adequate savings in time, effort and ease when re-using the workflow. for this, two factors are crucial: obviously, the primary requirement is the correct implementation of the algorithm, but equally important are standard-conforming ”friendly” user interfaces. grass started out 25 years ago as a system to be accessed via a (unix)shell as a command line interface (cli). during the evolution of grass, graphical user interfaces (gui) have been added to allow interaction via graphical icons and visual indicators. therefore, grass scripts should support support both means of interaction (cli and gui). this can be easily provided by the g.parser module [5]. it provides standardized input/output interfaces and help-page templates, both for the use of the module of a cli and grass guis. this approach is colloquially referred to as ”boilerplating”, as it provides a convenient means to create a high-quality front-end for the users with little effort. further, a script can also be started by a mouseclick if it is integrated in an overall grass gui-manager (like tcl/tk or wxpython) by including it to the gui configuration scripts. geinformatics fce ctu 2009 56 loewe p.: on scripting grass gis: building location-independent command line tools automating grass gis apart from applying grass scripts during interactive grass sessions, it is possible to automatically deploy grass scripts without user interaction. this allows for automated grass-based processing without the need for user interaction. 
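returning to the boilerplating idea above, a shell script only needs a commented header and a small preamble to obtain a standardized cli and gui front-end from g.parser; the following sketch uses the usual grass 6 template, with an option name and a final r.info call made up for the example.

#!/bin/sh
#%Module
#% description: example script: report basic information about a raster map
#%End
#%option
#% key: input
#% type: string
#% gisprompt: old,cell,raster
#% description: name of input raster map
#% required: yes
#%End

# refuse to run outside of a grass session
if [ -z "$GISBASE" ] ; then
    echo "You must be in GRASS GIS to run this program." 1>&2
    exit 1
fi

# on the first invocation hand all arguments to g.parser,
# which re-executes the script with parsed options
if [ "$1" != "@ARGS_PARSED@" ] ; then
    exec g.parser "$0" "$@"
fi

# parsed options are exposed as environment variables
r.info map="$GIS_OPT_INPUT"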
during the startup of a grass session, the grass environment has to be linked to the current geodata work environment (grass term: ’location’) and its metadata (i.e. projection). this information is defined by a set of environment variables. the setting of these shell variables is automated, but can also be done manually or within scripts. the latter technique will be put to use in the external scripting section. scripted grass sessions since the release of grass 6.3, a grass session can be automatically started to execute a preselected script: after setting a specific environment variable (i.e. grass batch job) [9] to the path of an external grass script, this script will be executed when the next grass session is started, implicitly assuming the availability of a grass location. this approach can not be used in situations when no grass locations have been set up before. external scripting external scripting invokes grass functionality outside of an interactive or scripted grass session. this allows to create stand-alone scripts and has been available since grass version 5.x : ’once a minimum number of environment variables have been set, grass commands can be integrated into shell, cgi, perl, php and other scripts’ [2, p.290]. initially, his approach appears less convenient than variable-controlled scripted grass sessions. however, it enables full control over the location settings. this allows to start computations literally ’from scratch’, creating grass locations in the process. example: external scripting the following example showcases the necessary steps required for an external script to initialize a grass session. it demonstrates how to proceed from an unprojected location to a projected one. external grass initialisation to invoke grass without a proper location available in the filesystem, a temporary mock-up has to be created. this is done in a user-defined folder, by setting up the following directory structure: on the highest level, the location directory ($the location), provides metadata and projection information. for each explicitly named region of interest (grass term: mapset), a subfolder is created within the location-directory. as each grass location is required to contain one mapset named permenent, this is the recommended name for the intitial mapset to be created. in the following code fragment, the directories are referred to by the variables $the location and $the mapset while the variable geinformatics fce ctu 2009 57 loewe p.: on scripting grass gis: building location-independent command line tools $grass dbase example refers to the temporary directory. in addition to the described folder stucture, a file is required by grass gis to contain the location’s metadata. this ascii file, by default named ’wind’, is set up for an location lacking projection information (’proj:0’), defining an area of singular extent. these settings serve as placeholders to be updated during the following steps. the file is also copied into a second instance (’default wind’). in a last step, the database driver is defined by creating the file ’var’. 
# create a wind file with minimal information and no projection: echo "proj: 0 zone: 0 north: 1 south: 0 east: 1 west: 0 cols: 1 rows: 1 e-w resol: 1 n-s resol: 1 top: 1 bottom: 0 cols3: 1 rows3: 1 depths: 1 e-w resol3: 1 n-s resol3: 1 t-b resol: 1 " > ${grass_dbase_example}/$the_location/$the_mapset/wind # copy wind-file to default_wind: cp ${grass_dbase_example}/$the_location/$the_mapset/wind \ ${grass_dbase_example}/$the_location/$the_mapset/default_wind # set default database driver: echo "dbf_driver: dbf db_database : $gisdbase/$location_name/$mapset/dbf/ " > ${grass_dbase_example}/$the_location/$the_mapset/var [code snippet 1: first step of grass gis initialization] when grass is started without a reference to a specific location, it attempts to refer to the last previously used location. this mechanism is exploited to point grass to the mock-up location. the following code snippet demonstrates how the required references are stored in an ascii file. the standard file used for this is named ’.grassrc6’ (for grass versions 6.x) but in the mock-up case, any arbitrary name can be used because the filename is stored in a shell variable in the last step. echo "gisdbase: ${grass_dbase_example} location_name: $the_location mapset: $the_mapset " > ${grass_dbase_example}/$the_grassrc [code snippet 2: second step of grass gis initialization: creation of the grassrc-file] as a final step, the shell variables required for grass have to be set to match the newly created directories and files. # $gisbase points to the grass installation to be used: export gisbase=/opt/grass # extend $path for the default grass scripts: export path=$path:$gisbase/bin:$gisbase/scripts geinformatics fce ctu 2009 58 loewe p.: on scripting grass gis: building location-independent command line tools # add grass library information: export ld_library_path=$ld_library_path:$gisbase/lib # use process id (pid) as lock file number: export gis_lock=$$ # backup previously existing grass references # export gisrc_backup=${gisrc} # path to grass settings file: export gisrc=${grass_dbase_example}/$the_grassrc db.connect driver=dbf database=’$gisdbase/$location_name/$mapset/dbf/’ [code snippet 2: third step of grass gis initialization: export of paths and settings] at this point, a fully operational grass gis environment has been established within the script. auto-generating a location from external data after the initialization stage, grass modules can be used within the mock-up location. geodata can be imported, inluding the generation of additional locations based on the projecton information derived from the input data. the grass module r.in.gdal [6] is used for raster data while the module v.in.ogr [10] is used for vector import. # read the raster data stored in the geotiff "example.tif" in directory /tmp # into to the layer ’raster_layer’ in the new location ’raster_location’ r.in.gdal -e input=/tmp/example.tif output=raster_layer location=raster_location # read the line-vectors stored in the shapefile "example.shp" in directory /tmp # into to the layer ’vector_layer’ in the new location ’vector_location’ v.in.ogr -e dsn=/tmp output=vector_layer layer=example.shp type=line location=vector_location switching over to the new location is achieved by setting the shell variables accordingly. by default, the data is stored in the newly created location within the mapset ’permanent’. if the external data is lacking proper projection information (i.e. 
a shapefile is provided without a .prj-file), the approach described in the next section can be used to explictly define the projection. projection changes within a script using the module g.proj [15] allows to advance from the initial mock-up location to a projected location: the necessary parameters can either be provided to the g.proj module or a european petroleum survey group (epsg) [13] numeric code can be used. it must be noted that g.proj will only work on the permanent mapset of a location. the example uses the epsg code ”4326”, which refers to geographical coordinates using the wgs84 ellipsoid: # override the current projection setting g.proj --quiet -c epsg=4326 # define the extent of the region of interest g.region --quiet -s n=90 s=-90 w=-180 e=180 res=1 [code snippet 4: the usage of g.proj to override the current projection with a epsg code] geinformatics fce ctu 2009 59 loewe p.: on scripting grass gis: building location-independent command line tools shifting the focus between multiple locations after a projected location containing data has been established, either by definition in the script or external data, it might become necessary to write out data in another projection. for this, it is necessary to transfer the results into an additional location in the output projection, using r.out.gdal [7] and v.out.ogr [11] to finally export the actual data. the export location is created as described before: the sequence of defining a gisrc-file pointing to the location and mapset, exporting the gis lock variable and finally pointing the gisrc-variable to the new gisrc-file is repeated, thereby changing the focus of the grass session to the addedd location. once this has been accomplished, the reprojection modules r.proj [8] and v.proj [12] can be used to transfer the data into the location. after providing the required output data products, all locations can be safely removed from the file system. care should be taken to restore the cached value of the variable $grassrc for upcoming interactive grass sesssions. fields of applications the approach of external scripting of grass gis is beneficial for all repetitive processing tasks without the need of user interaction. such tasks can be either stand alone applications or part of larger workflows, where geospatial processing is only a subtask. grass scripting also provides alternatives to the set-up of up open web services (ows), like web mapping service (wms) or web processing service (wps) to provide automated geospatial map-products or to do geoprocessing. such gis-produced maps can be used to regularly update webpages. a classic example by neteler [14] provides a global map of earthquake epicenters. this is accomplished by importing and processing external earthquake information in a grass-based externally scripted workflow. another noteworthy aspect about using automated scripts is the option to reduce the functionality and thereby to minimize the footprint of the gis: the footprint of a contemporary ”off the shelf” grass gis installation on the filesystem is about 50mb. if grass functionality is implemented in an automated workflow, grass can be reduced to exactly those modules which are required in the workflow. all other grass modules would be a waste of potentially critical ressources. this approach is especially useful when implementing workflows on embedded systems, such as environmental sensors with limited system ressources. 
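to make the location switching and export step described above concrete, the following sketch re-uses the variable style of the earlier code snippets and assumes that an export location has already been prepared in the same way as in code snippets 1 and 2; note that the variables and gisrc keys are case-sensitive in a real script (GISRC, GISDBASE, LOCATION_NAME, MAPSET), and the module parameters shown are only examples.

# point grass to the prepared export location by writing a new gisrc file
THE_EXPORT_LOCATION=export_location
THE_EXPORT_MAPSET=PERMANENT

echo "GISDBASE: ${GRASS_DBASE_EXAMPLE}
LOCATION_NAME: ${THE_EXPORT_LOCATION}
MAPSET: ${THE_EXPORT_MAPSET}
" > ${GRASS_DBASE_EXAMPLE}/.grassrc_export
export GISRC=${GRASS_DBASE_EXAMPLE}/.grassrc_export

# pull the raster result over from the source location, reprojecting on the fly
r.proj input=raster_layer location=raster_location mapset=PERMANENT output=raster_layer

# write the final product as a geotiff
r.out.gdal input=raster_layer format=GTiff output=/tmp/result.tif

# restore the previously cached gisrc for later interactive sessions
export GISRC=${GISRC_BACKUP}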
conclusion this paper provided a technical overview on the current options to deploy scripts based on grass modules. it was demonstrated how inherent grass gis-functionality can be wrapped in scripts to be applied beyond the scope of an interactive grass gis session. beyond the technical challenge to ”think script”, the significance of being able to provide ready-to-use tools to a wide audience is emphasized: the empowering of audiences lacking geinformatics fce ctu 2009 60 loewe p.: on scripting grass gis: building location-independent command line tools previous foss gis exposure to perform challenging geospatial tasks (while hiding the complexities of the actual grass gis solution) is expected to broaden the user base and overall impact of free and open source softare (foss) gis applications. references 1. löwe p: niederschlagserosivität: eine fallstudie aus südafrika, basierend auf wetterradar und open source gis, vdm verlag, 2008, isbn 978-3-8364-5018-8 2. neteler, m. and mitášová h.: open source gis: a grass gis approach, kluwer academic publishers group, 2002, isbn 1-4020-7088-8 3. grass addons http://grass.osgeo.org/wiki/grass addons 4. grass gis module to remove data base element files from the user’s current mapset. http://grass.fbk.eu/grass64/manuals/html64 user/g.mremove.html 5. grass module to provide canonic interfaces for scripts http://grass.itc.it/grass64/manuals/html64 user/g.parser.html 6. raster data import http://grass.itc.it/grass64/manuals/html64 user/r.in.gdal.html 7. raster data export http://grass.itc.it/grass64/manuals/html64 user/r.out.gdal.html 8. reprojection of raster data from other grass locations with differing projection http://grass.itc.it/grass64/manuals/html64 user/r.proj.html 9. overview over grass environment variables and setting options http://grass.itc.it/grass64/manuals/html64 user/variables.html 10. vector data import http://grass.itc.it/grass64/manuals/html64 user/v.in.ogr.html 11. vector data export http://grass.itc.it/grass64/manuals/html64 user/v.out.ogr.html 12. reprojection of vector data from other grass locations with differing projection http://grass.itc.it/grass64/manuals/html64 user/v.proj.html 13. overview over epsg codes http://www.epsg.org/geodetic.html 14. automated earthquake map based on grass gis http://grass.itc.it/spearfish/php grass earthquakes.php 15. 
grass module to manipulate projection settings http://www.grass.itc.it/grass64/manuals/html64 user/g.proj.html geinformatics fce ctu 2009 61 http://grass.osgeo.org/wiki/grass_addons http://grass.fbk.eu/grass64/manuals/html64_user/g.mremove.html http://grass.itc.it/grass64/manuals/html64_user/g.parser.html http://grass.itc.it/grass64/manuals/html64_user/r.in.gdal.html http://grass.itc.it/grass64/manuals/html64_user/r.out.gdal.html http://grass.itc.it/grass64/manuals/html64_user/r.proj.html http://grass.itc.it/grass64/manuals/html64_user/variables.html http://grass.itc.it/grass64/manuals/html64_user/v.in.ogr.html http://grass.itc.it/grass64/manuals/html64_user/v.out.ogr.html http://grass.itc.it/grass64/manuals/html64_user/v.proj.html http://www.epsg.org/geodetic.html http://grass.itc.it/spearfish/php_grass_earthquakes.php http://www.grass.itc.it/grass64/manuals/html64_user/g.proj.html geinformatics fce ctu 2009 62 toolbar icons for gis applications robert szczepanek institute of water engineering and water management, cracow university of technology robert.szczepanek iigw.pl keywords: icon, gis, usability, gui abstract graphical user interface is an important element of today software. discussion on design aspects of toolbar icons is presented. three concepts related to gis applications are proposed. preliminary icon set gis-0.1 oriented to usability and simplicity is outlined. introduction graphical user interfaces (gui) become standard element of desktop applications. toolbar icons are probably the most frequently used elements of gui. some of them are universal (fig.1), some are commonly used in certain domain (fig.2) and some are application specific (fig.3). fig.1 universal icons fig.2 domain specific icons – gis fig.3 application specific icons – qgis gis applications are different and have different interfaces. this is good, because we like diversity. the philosophy and implementation of gis functions is different among applications. but do they really should use different symbols for the same objects and actions? why traffic signs are (almost) the same among different countries? shouldn’t we try the same in our domain? geinformatics fce ctu 2008 79 toolbar icons for gis applications if you feel familiar with gis applications try a short quiz1 by karsten berlin at [1]. as will be shown later, even simply icons like import and export can be misunderstood. my proposal is towards icons lerning curve shifting from application specific group to domain one (fig.4). this is more matter of symbology, not final visual implementation, so every gis application can keep its identity untouched. i don’t intend to present ”the only right” solution, rather present my voice in discussion. fig.4 icon learning curve behind the scene – meaning of words and symbols lets start from very beginning. analyzing different application i found that simple operations like add, new and create are treated as synonyms and often mixed in any combination. is it correct? according to definitions in table 1 not exactly. we can treat new and create as synonyms, but create is an action, while new isn’t. they are both related to object that didn’t exist, while add is used for operation on existing objects. so there are two basic actions. create when we bring into existence. for example create layer in the sense of creation of new layer. add when we put existing object into some group. for example add layer to composition/group of layers. looking at object’s death (tab.2) we find more serious existential problems. 
looking at an object's death (tab.2), we find more serious existential problems. the first problem is that we have a cross-definition: erase is defined by delete and remove, while delete is defined by erase and remove. delete and remove seem to be the simpler cases. removed objects still exist after the operation; we only change their properties, so remove can be treated as the reverse operation to add. the delete operation results in the annihilation of the object. erase can be used in both contexts, so it should be avoided, or used only in the sense of cleaning an object without annihilating it. finally, we get the following antonyms: create – delete, add – remove.

table 1 – meaning of the words add, new and create (definitions from http://www.merriam-webster.com and http://www.thefreedictionary.com):
add (verb) – merriam-webster: "1: to join or unite so as to bring about an increase or improvement; 4: to include as a member of a group"; thefreedictionary: "to join or unite so as to increase in size, quantity, quality, or scope".
new (adjective) – merriam-webster: "having recently come into existence"; thefreedictionary: "having been made or come into being only a short time ago; recent".
create (verb) – merriam-webster: "to bring into existence"; thefreedictionary: "to cause to come into existence".

table 2 – meaning of the words erase, delete and remove (same sources):
erase (verb) – merriam-webster: "1 a: to rub or scrape out (as written, painted, or engraved letters); d: to delete from a computer storage device"; thefreedictionary: "to remove (recorded material) from a magnetic tape or other storage medium".
delete (verb) – merriam-webster: "to eliminate especially by blotting out, cutting out, or erasing"; thefreedictionary: "to remove by striking out or canceling".
remove (verb) – merriam-webster: "to change the location, position"; thefreedictionary: "to move from a place".

how this relates to visual representation can be checked in table 3. the results are based on the google picture search mechanism: the first 100 hits of each search were generalized. this method is neither representative nor objective, but it gives a rough picture of how the different actions are visualized.

table 3 – basic action icon representations based on the first 100 hits in a google picture search (the symbols heading the columns are small pictograms in the original and are not reproduced here; only the counts remain): add 54, 1; new 4, 4, 9; create 7, 3, 4; erase –, 2, 14, 3; delete 4, 58, 19; remove 15, 31, 1, 4.

the most unambiguous signs (shown as small pictograms in the original) are the one corresponding to the add action and the one corresponding to delete. both are very universal and have no connotation with any specific object. for the create action i would recommend the chosen sign because it is less neutral. the remove action is identified with a sign that is at the same time better known as the delete action, so we take the second most frequent sign instead. for the erase action we have a sign which is not neutral; unfortunately, no better sign has been found yet. finally, we get the following set of signs for create, delete, add, remove and erase (the last hopefully to be replaced in the future by a more neutral sign).

toolbar icons from a gis application perspective

icons in toolbars are used as comfortable shortcuts to commands. a good icon should be unambiguous and easy to remember [3]. apart from artistic and visual aspects, there are also some technical issues in icon design.

size

due to the limited area available for toolbars and the number of potential icons in an application, one of the critical elements is icon size. icon size determines its recognizability, so we can't make it too small.
but available workspace is also limited and depends on standard display resolution, which changes constantly. so icon size is compromise between screen resolutions, our perception capabilities and available space within application. usually set of icons with different sizes is prepared. depending on icon size different levels of detail are visualized. suggested for windows toolbar icon sizes are 16x16, 24x24 and 32x32 pixels [2][7]. in microsoft’s recommendations we can read that for this size of icon simplification is recommended. so we forget about photorealistic pictures. gis and cad applications run usually on big monitors, so 16x16 pixels icons are really small ones. two following two sizes are thus to be considered as basic. perspective, lights and shadows toolbar icons should be always flat, not 3d, even at the 32x32 size [7]. in some cases this is difficult to achieve. one of such symbols is layer, which will be discussed later. according microsoft suggestions, for flat icon lighting comes from the upper-left at 130 degrees and parallel light rays produce shadows that all have the same length and density. however use of shadows in icons at 24x24 or smaller size is not recommended [5][7]. colors in interface design, color is often overused. one of the most important points is that color table must be consistent, so aggressive colors close to pastel ones doesn’t look good. color geinformatics fce ctu 2008 82 toolbar icons for gis applications is often used to communicate status. the interpretation of red, yellow, and green for status is consistent globally [7]. however, color should not be used as primary medium of message. there are different methods to utilize saturation or hue to reinforce icons message. are also other methods to play with visual effect, like gradients making picture more realistic. toolbar icons should not use colors and design similar to other elements of interface, e.g. warning alerts [3]. file format and naming conventions icon for toolbar can be saved in many different formats. the most popular is still raster, but vector format seems to take this place in near future. when drawing icon usually transparency is needed. transparency can have 256 levels in 8bit alpha channel file formats (png, tiff) or 2 levels in 1-bit case (gif, png) when one color is selected as transparent. this transparent color should be chosen carefully. the most popular and safe color is magenta (#ff00ff). from raster formats png seems to be the most suitable, and from vector formats svg. presently, the complete procedure of icon design is the following: � paper and pencil – initial concept, sketch � vector program – primary, scalable digital version � raster program – final raster version some designers skip first or even first two steps. to make raster icons from vector file is not so straightforward, and for smaller icons picture have to be generalized. also simply downscaling from big raster icons to smaller size doesn’t work [7]. simple and consistent naming convention of icon files can be advantageous. good example of such consistency can be quantum gis (qgis): � mactionaddrasterlayer.png – for adding action on raster layer � grass add vertex.png – for grass modules icon as message what makes an icon – shape, content, color? all mentioned elements are important but their role is different. icon shape changed recent years from rough 2d pictures to photo realistic visualization. 
windows aero (vista) icons set compared to previous version (xp-style) is more realistic than illustrative, toolbar icons have less detail and no perspective to optimize for smaller sizes and visual distinctiveness [7]. visualization technologies fascination will end, when we understood that effective pictogram recognition is not the matter of realism level but rather association. content is the most conservative element and once spread out, becomes standard de facto. good example of such standard is icon for save operation. everyone recognizes icon with 3,5” diskette instantly, but who in 5 years will know what is shown on that icon? sometimes geinformatics fce ctu 2008 83 toolbar icons for gis applications content is not directly related with function and when used in domain specific icon group can be difficult to recognize by new user. there are many discussions on that problem – should we be conservative preserving old symbols, which are part of our history or try to find better ones. understanding of color’s role and its usage changed when accessibility started to be an important issue. any message, including graphics, should be accessible to everyone, so color cannot be used as primary or unique method of communication. in time of globalization this is a big challenge but color related problems are even more complicated. colors and symbols have cultural context and sometimes even religious connotations. in some places white color is related to wedding while in others with funeral. the same problem is with black. but not only the color is very sensitive element of message. drawing forefinger we do not know often what connotation it has in other cultures. the last important element of icon communication is context in which it exists. left arrow can represent direction of movement, speed of movement or some conventional operation like undo, import or export. it depends on neighboring icons. context can simplify of complicate message, so icons final location should be considered already at design stage. snapshot of selected gis toolbar icons just to give an idea of diversity and different approaches in design on following figures (5-14) selected gis applications toolbar are presented. fig.5 grass 6.3 toolbar. fig.6 qgis 1.0 toolbar. fig.7 arcmap toolbar. fig.8 geomedia viewer 5.2 toolbar. geinformatics fce ctu 2008 84 toolbar icons for gis applications fig.9 gvsig toolbar. fig.10 idrisi32 toolbar. fig.11 mapinfo 8 toolbar. fig.12 openjump toolbar. fig.13 thurban toolbar. fig.14 udig toolbar. implementation of gis-0.1 icon set for grass and qgis when designing gis domain icons, several assumptions were taken into account. some of them are obvious, but hard to implement like recognizability and transferability. others are controversial, but in my opinion worth to test. grass (with wxpython) and qgis were chosen for tests implementation. both applications are ready for easy themes implementation, so everyone is able to customize icon sets. new, wxpythons-based gui of grass [6] uses as standard silk icon set [8] which is nice and well designed, but not always able to address gis needs. there are also other interesting projects related to icons development, like tango [10], but all of them are of general purpose. toolbar block context geinformatics fce ctu 2008 85 toolbar icons for gis applications there are two approaches to icon design within toolbar. first one is declarative. icon is selfexplaining without any additional information. 
making icon for ”add layer” we need object (layer) and action (add) picture. second one is simplified (contextual). in this approach we divide toolbar to caption with object (inactive) and icons with only actions. so ”add layer” can be represented just by action (add) and the object will be known from context – layer toolbar. concept 1a: where possible, decompose object from action and create icons consisting of both elements. this concept is based on methodology described be y.gilyov [4]. icon can be solid or contain two elements – object part and action. where possible, object-action approach should be used. if action primitives are well defined, they become reusable. it simplifies regonizability. good example in this direction is ‘add’ action, which is used in wide range of icons. action part should be placed probably in lower right part of the icon, framed by semi-transparent background (fig.15). transparency enables partial use of action area by object part, while not disturbing too much action part. there is only one limitation. as space for action part is very limited, action primitive must be really simple. fig.15 object-action method of icon design concept 1b: group icons by object. the second (contextual) design is probably more scalable and easier to implement especially for small size icons. we need just one set of action icons for any object – add, remove, etc. in many applications it is difficult to figure toolbar context. usually we know it just because we use application, but for beginner this is a big challenge. sometimes simplified design leads to misunderstanding. the most popular and most frequently used icons (new, open, save) are first in toolbar. but they are without any additional information. we know that they correspond to the root object in object’s tree. but sometimes it is difficult to guess what is the root object. in gis application it can be composition (idrisi), mapset (grass), project (qgis) or maybe something else. why not to show it explicitly. here we come to conclusion – every simplified toolbar should have at the beginning graphical caption (icon) representing object (fig.16). of course the visual representation should be different from action icons. content icon should be simple and easy to guess. let’s analyze gis related symbols from table 4. geinformatics fce ctu 2008 86 toolbar icons for gis applications fig.16 contextual method of icon design close 84 refresh 65 10 save 60 10 edit 53 5 display 33 9 open 20 12 4 map 15 14 9 export 15 7 import 12 5 exit 11 11 9 pan 11 4 layer 6 5 show 5 1 table 4 – common gis icons representation based on first 100 hits in google picture search. the most unambiguous sign is corresponding to close action. but we decided to use it for delete action already. one of possible solutions can be use of synonym which in this case is exit action represented by . save icon have two main symbols with predominance of . but technology changes very fast. what to do with historical object in our icons? is it better to use physical objects or some metaphors? concept 2: new, more neutral objects or metaphors can replace some old-technology icons. there’s a push to get rid of the file-folder metaphor and floppy disk 3,5” for saving. icon should not rely on current technology visualisation. those symbols are used because everyone is familiar with them. second sign is far more neutral and universal. similar situation is with open action, which is related to folder picture and arrows. map icon is very difficult case. 
regular connotation with globe is proper one, but not the best from gis point of view. second the most frequent is 3d view of paper map . on import/export example we can see problems of interpretation. in this case majority is probably right and when we export, arrow must go ”from” object. synthesis of this action geinformatics fce ctu 2008 87 toolbar icons for gis applications with proposal of more neutral icons for open and save actions is presented on fig.17. fig.17 basic actions – export, import, open, save. pan operation is represented by or fingers, but we must remember about cultural connotations, so this sort of signs should be avoided. so for pan we choose . layer object is represented by three parallel rectangles with supremacy of 2d view. show operation is assigned human eye sign . explicit or not last concept is based on observation that for fast and easy perception not whole object is needed. concept 3: not whole object or symbol must be shown, to be recognized properly. this can be seen in favicons design and in some modern interfaces. one of good implementation examples can be virtualbox2 interface. if properly designed, this could solve problem with very limited size of icon. at this stage of research implementation of this concept was not tested yet. final note presented concept and practical implementations of gis-0.1 icons set are still under development. recent version is available under http://www.szczepanek.pl/icons.grass. references 1. berlin k. (2007), gis usability games online3 2. creating windows xp icons, windows xp technical articles, 2001 online4 3. designing toolbar icons, apple human interface guidelines online5 4. gilyov y. (2007): designing an iconic language online 6 5. kortunov d. (2008): 10 mistakes in icon design online7 2 http://www.virtualbox.org/wiki/screenshots 3 http://www.karsten-berlin.net/gisusability.php?top=games 4 http://msdn.microsoft.com/en-us/library/ms997636.aspx 5 http://developer.apple.com/documentation/userexperience/conceptual/applehiguidelines/xh \ igicons/chapter 15 section 9.html 6 http://turbomilk.com/blog/cookbook/usability/designing an iconic language/ 7 http://turbomilk.com/blog/cookbook/criticism/10 mistakes in icon design/ geinformatics fce ctu 2008 88 http://www.virtualbox.org/wiki/screenshots http://www.szczepanek.pl/icons.grass http://www.karsten-berlin.net/gisusability.php?top=games http://msdn.microsoft.com/en-us/library/ms997636.aspx http://developer.apple.com/documentation/userexperience/conceptual/applehiguidelines/xhigicons/chapter_15_section_9.html http://turbomilk.com/blog/cookbook/usability/designing_an_iconic_language/ http://turbomilk.com/blog/cookbook/criticism/10_mistakes_in_icon_design/ toolbar icons for gis applications 6. landa, m. (2007): gui development for grass gis. geoinformatics fce ctu 2007, workshop proceedings vol. 2, prague online8 7. microsoft windows vista development center online9 8. silk icons online10 9. szczepanek r. (2008): custom icons for grass online11 10. 
tango desktop project online12 8 http://geoinformatics.fsv.cvut.cz/wiki/index.php/gui development for grass gis 9 http://msdn.microsoft.com/en-us/library/aa511280.aspx 10 http://www.famfamfam.com/lab/icons/silk/ 11 http://www.szczepanek.pl/icons.grass/ 12 http://tango.freedesktop.org/tango desktop project geinformatics fce ctu 2008 89 http://geoinformatics.fsv.cvut.cz/wiki/index.php/gui_development_for_grass_gis http://msdn.microsoft.com/en-us/library/aa511280.aspx http://www.famfamfam.com/lab/icons/silk/ http://www.szczepanek.pl/icons.grass/ http://tango.freedesktop.org/tango_desktop_project geinformatics fce ctu 2008 90 optimalizace procesu generováńı map pomoćı xml otakar čerba odděleńı geomatiky, katedra matematiky, fakulta aplikovaných věd, západočeská univerzita v plzni ota.cerba@seznam.cz kĺıčová slova: kartografické procesy, xml, atlas mezinárodńıch vztah̊u, optimalizace abstrakt př́ıspěvek je zaměřený na optimalizaci proces̊u použitých při tvorbě atlasu mezinárodńıch vztah̊u [wai2007]. tento atlasový projekt vznikl v létech 2006-2007 na p̊udě západočeské univerzity v plzni. jednotlivé mapy tohoto tǐstěného atlasu byly generovány metodami webové kartografie, přičemž šlo o prvńı rozsáhlý projekt, kdy byly použity xml (extensible markup language) technologie pro generováńı tematických map ve formě klasického atlasu. proto se během tvorby atlasu a po jej́ım ukončeńı objevily nedostatky, které by měly být v př́ıpadě daľśıho využ́ıváńı vytvořených postup̊u odstraněny nebo alespoň částečně eliminovány. př́ıspěvek se skládá ze tř́ı část́ı, které popisuj́ı vytvořenou publikaci, princip generováńı map atlasu a navrhovaná zlepšeńı použ́ıvaných postup̊u. úvod – atlas mezinárodńıch vztah̊u atlas mezinárodńıch vztah̊u [wai2007] je publikace, která byla vytvořena desetičlenným autorským kolektivem pod vedeńım phdr. šárky waisové, phd. součást́ı autorského kolektivu byli zástupci katedry politologie a mezinárodńıch vztah̊u filosofické fakulty západočeské univerzity v plzni (zču), kteř́ı zajǐst’ovali textovou část a sběr dat. tvorba kartografických výstup̊u byla úkolem člen̊u odděleńı geomatiky, které spadá pod katedru matematiky fakulty aplikovaných věd zču. tvorbu kartografické části zajǐst’ovali ing. magdaléna baranová, doc. ing. václav čada, csc., ing. et mgr. otakar čerba a ing. karel jedlička. důvodem vzniku atlasu byla předevš́ım absence podobné publikace na českém trhu. atlas by mohl být využ́ıvaný širokým spektrem odborńık̊u z nejr̊uzněǰśıch obor̊u (politologie, politická geografie, studium mezinárodńıch vztah̊u, zahraničńı obchod apod.) a studenty př́ıslušných vědńıch na univerzitách i na středńıch školách. své mı́sto by tato publikace mohla nalézt i v knihovnách laik̊u, předevš́ım v souvislosti s rasantńımi změnami, které na politické mapě světa odehrávaj́ı a také se stále rostoućım významem politiky a globálńıch vztah̊u v životě běžného člověka. geinformatics fce ctu 2007 101 optimalizace procesu generováńı map pomoćı xml finálńı verze atlasu obsahuje 72 map (viz následuj́ıćı tabulka) a doprovodných text̊u na v́ıce než 150 stránkách. atlas je dále doplněn daľśımi obrázky, tabulkami a grafy [čer2007a], [bar2007]. 
druh map počet map politická mapa světa 1 fyzicko-geografická mapa světa 1 tematické mapy 47 atypické mapy 4 mapy politicko-geografických region̊u 4 detailńı mapy 15 celkem 72 table 1 – struktura atlasu mezinárodńıch vztah̊u princip generováńı map kromě obsahu, grafické úpravy a rozměr̊u byl základńım limitńım faktorem pro tvorbu tematických map atlasu mezinárodńıch vztah̊u výstupńı formát. veškeré kartografické výstupy bylo potřeba vytvořit ve formátu pdf (portable document format), který požadovala tiskárna vydavatelstv́ı aleš čeněk. postup tvorby map: 1. navržeńı základńıho rámce map (měř́ıtková řada, grafický design, umı́stěńı a tvar základńıch kompozičńıch prvk̊u apod.; v́ıce viz [čer2007a]). 2. volba kartografického zobrazeńı (autorský kolektiv vybral modifikované vyrovnávaćı zobrazeńı times; v́ıce viz [bar2007]). 3. volba interpretačńıch metod (viz tab 2 [čer2007a], [bar2007]). 4. generováńı map. 5. kontrolńı a validačńı procedury. ačkoli finálńım produktem měla být tǐstěná verze atlasu, jednotlivé mapy byly generovány technikami webové kartografie. hlavńım d̊uvodem byla předevš́ım předpokládaná aktualizace map, kterou si bezpochyby vyžádá překotný vývoj na poli mezinárodńıch vztah̊u. vytvořené šablony je možné využ́ıvat i pro daľśı zpracováńı prostorových dat. výstupem námi použité metody byly mapy ve formátu svg (scalable vector graphics), které byly pomoćı programů inkscape (grafická editace a transformace do formátu postscript) a gsview32.exe, a ghostscript graphical interface převedeny do formátu pdf určeného pro výsledný tisk. pro generováńı map byly použity formáty založené na bázi značkovaćıho jazyka xml (extensible markup language) a př́ıbuzné technologie. konkrétně se jednalo o formáty ([čer2007a], [bar2007]): 1. vlastńı xml schéma (soubor atlas.xml) popisuj́ıćı jednotlivé mapy atlasu, mapové geinformatics fce ctu 2007 102 optimalizace procesu generováńı map pomoćı xml symboly, barevné stupnice (barevné stupnice většinou pocháźı z webových stránek cynthie a. brewer1) a formáty výsledných map. 2. xml namespaces (jmenné prostory) umožňuj́ıćı použ́ıvat v jednom dokumentu v́ıce typ̊u značeńı neboli v́ıce značek (tag̊u) pro elementy a atributy. např́ıklad dokument atlas.xml obsahuje elementy definované ve vlastńım značeńı a také prvky ze schématu svg. podobně i transformačńı styl se skládá z prvk̊u jazyk̊u xslt, xlink a svg. 3. jml neboli jump gml (geography markup language) představuj́ıćı specifickou podmnožinu jazyka gml, který je primárně určený pro popis geografických dat. formát jml sloužil ke kódováńı geoprostorových i atributových dat. 4. xslt (extensible stylesheet language transformation), který je součást́ı komplexńıho stylového a transformačńıho jazyka xsl (extensible stylesheet language). xslt se použ́ıvá pro transformaci xml do jiných, nejen xml formát̊u. transformačńı styl slouž́ı k převodu datových soubor̊u (formáty jml a xml) na vlastńı mapu (formát svg). pro tvorbu atlasu byly nejprve použ́ıvány šablony (soubor atlas.xsl) zapsané v kombinaci prvńı verze jazyka xslt a jej́ıho rozš́ı̌reńı exslt. v červnu 2006 došlo k přepsáńı šablon nové verze xslt 2.0. 5. xpath (xml path language) představuje dotazovaćı jazyk určený pro výběr jednotlivých část́ı xml dokumentu. při tvorbě atlasu byl jazyk xpath použ́ıvaný pro výběr jednotlivých část́ı mapy (např. zeměpisná śıt’, popisky, diagramy apod.), které byly následně zpracovávány transformačńım procesorem podle zásad transformačńıho stylu. 
v pr̊uběhu tvorby atlasu došlo podobně jako v př́ıpadě xslt ke změně verze xpath – nyńı je v souboru atlas.xsl použ́ıván xpath 2.0. 6. xlink (xml linking language) umožňuje odkazy mezi xml dokumenty i mezi jeho částmi. oproti odkaz̊um známým z html umožňuje i dvousměrné nebo dokonce v́ıcesměrné odkazy. v jednotlivých mapách se xlink 1.0 použ́ıvá pro odkazy na př́ıslušné barevné přechody, přičemž p̊uvodně měl sloužit také k vytvořeńı vazeb mezi popisem symbolu a jeho lokalizaćı. 7. svg (scalable vector graphics) je otevřený vektorový formát určený předevš́ım pro popis a distribuci dvourozměrných vektorových dat v prostřed́ı internetu (v́ıce viz [čer2006b]). mapy byly generovány ve formátu svg 1.1. 8. xhtml (extensible hypertext markup language) je jazyk určený pro popis obsahu www stránek. jedná se o př́ımého následńıka velice populárńıho jazyka html (hypertext markup language), resp. o propojeńı html a xml. v projektu atlas mezinárodńıch vztah̊u se jazyk xhtml 1.0 strict společně s kaskádovými styly použil pro definováńı webových stránek s jednotlivými mapami, které sloužily pro jejich prohĺıžeńı a př́ıpadné revize. 9. css (cascading stylesheet, kaskádové styly) je jednoduchý stylový jazyk už́ıvaný předevš́ım ve spojeńı s html k definováńı vizualizačńıch pravidel. v našem př́ıpadě kaskádové styly (verze 2.1) posloužily jednak pro popis vizualizačńıch pravidel pro webovou stránku s ukázkami map a předevš́ım pro určeńı vizualizačńıch pravidel jednotlivých map. k 1 http://www.personal.psu.edu/cab38/colorbrewer/colorbrewer intro.html geinformatics fce ctu 2007 103 http://www.personal.psu.edu/cab38/colorbrewer/colorbrewer_intro.html http://www.personal.psu.edu/cab38/colorbrewer/colorbrewer_intro.html optimalizace procesu generováńı map pomoćı xml mapám jsou kaskádové styly připojeny dvěma zp̊usoby – společné vizualizačńı vlastnosti jsou k mapám připojeny pomoćı exterńıho stylu (zakladni styl.css) a některé specifické vlastnosti jsou popsány pomoćı inline styl̊u a xml prezentačńıch atribut̊u – oba zp̊usoby jsou zapsány jako atributy př́ıslušných element̊u. vlastńı generováńı map prob́ıhalo na principu přǐrazeńı vizualizačńıho stylu jednotlivým datovým soubor̊u. př́ıslušný styl a vstupńı data byly zpracovány pomoćı xslt procesoru – v př́ıpadě atlasu mezinárodńıch vztah̊u byl použitý volně šǐritelný produkt saxon 8.8 verze b, který implementuje jazyky xslt 2.0, xpath 2.0 (starš́ı verze xslt a xpath jsou také podporovány), xquery 1.0 a xml schema 1.0. vı́ce o generováńı map prostřednictv́ım xslt styl̊u viz [ten2003], [čer2006a], [čer2007b]. optimalizace procesu generováńı map zvolený postup – generováńı map ve formátu xml z xml dat pomoćı styl̊u zapsaných v xml s sebou přináš́ı následuj́ıćı výhody [bar2007]: � aplikace je sestavena výhradně z nekomerčńıho software. uživatel může téměř všechny finančńı prostředky použ́ıt na nákup kvalitńıch datových sad. � s xml soubory lze pracovat bez ohledu na použ́ıvané technologie, operačńı systém nebo softwarové vybaveńı. uživatelé xml maj́ı k dispozici velké množstv́ı nejr̊uzněǰśıho software – editory, parsery (programy pro kontrolu xml syntaxe), validátory, prohĺıžeče, xslt procesory (prostředky pro práci se stylovými jazyky), konvertory a daľśı. velký pod́ıl mezi xml software maj́ı programy š́ı̌rené pomoćı nějaké otevřené licence. � xml je velice rozš́ı̌renou technologíı – na internetu se nacháźı obrovské množstv́ı informaćı ve formě článk̊u, př́ıspěvk̊u z odborných konferenćı, tutoriál̊u, mailových konferenćı apod. 
např́ıklad téměř jedna čtvrtina všech př́ıspěvk̊u na konferenci svg open 2005 v nizozemském eshende byla věnována geografickým informačńım technologíım, předevš́ım tvorbě map. � pomoćı stylových a transformačńıch jazyk̊u docháźı k odděleńı obsahu dokumentu od vizualizačńıch pravidel. proto lze xml dokumenty velice snadno přizp̊usobovat konkrétńım potřebám uživatele. na druhou stranu použ́ıváńı styl̊u s sebou přináš́ı jednotný vzhled všech dokument̊u s možnost́ı jednoduché a předevš́ım rychlé aktualizace. � za celou aplikaćı stoj́ı pouze jediná technologie. nav́ıc technologie, která se velice rychle vyv́ıj́ı, ale je podporovaná (v r̊uzné mı́̌re) většinou světových výrobc̊u software. při vlastńı tvorbě map docházelo k řadě chyb, které byly zp̊usobeny předevš́ım nezkušenost́ı s projekty obdobného rozsahu a tématického zaměřeńı (za upozorněńı na chyby a postřehy v geinformatics fce ctu 2007 104 optimalizace procesu generováńı map pomoćı xml pr̊uběhu tvorby map i po vydáńı publikace je zapotřeb́ı poděkovat celému autorskému kolektivu, dále předevš́ım ing. jánu pravdovi, drsc. a také prof. rndr. vı́tu vožeńılekovi, csc., doc. rndr. jaromı́ru kaňokovi, csc. a mgr. monice čechurové, phd.). navrhované změny, které částečně eliminuj́ı zjǐstěné nedostatky, můžeme rozdělit do pěti základńıch skupin: 1. popis generovaných map. 2. optimalizace vstupńıch dat. 3. optimalizace transformačńıch šablon. 4. odstraněńı kartografických nedostatk̊u. 5. zlepšeńı datového a komunikačńıho toku. popis generovaných map v současné době prudce nar̊ustá počet geoprostorových dat, včetně kartografických výstup̊u. proto je nutné vytvářet podrobné a standardizované popisy veškerých datových soubor̊u. tentýž problém řeš́ı také směrnice inspire (infrastructure for spatial information in europe), jej́ımž jedńım úkolem je prosazeńı použ́ıváńı metadat. metadatový popis je d̊uležitý nejen z legislativńıho hlediska, ale také z pohledu sémantického webu, kdy metadatové záznamy umožńı efektivněǰśı vyhledáváńı a použ́ıváńı nejr̊uzněǰśıch katalogových služeb. pro př́ıpad vytvořeńı elektronické verze atlasu nebo aktualizaćı tǐstěné verze atlasu v digitálńı formě bude nutné doplnit generované mapy metadatovými záznamy. obsah metadat by se měl ř́ıdit mezinárodńı normou iso 19115 a směrnićı inspire. z hlediska formátu je d̊uležitá jeho otevřenost a standardizace – jako nejvhodněǰśı se jev́ı formát na bázi xml dcmi (dublin core metadata initiative). zařazeńı xml metadatového formátu umožńı generováńı některých metadatových záznamů pomoćı xslt př́ımo z popisu map (např. název mapy). daľśı možnost́ı je vložeńı metadatových záznamů ve formátu dcmi př́ımo do popisu mapy (soubor atlas.xml) pomoćı xml namespaces. metadatový popis by měl být připojen nejen k jednotlivým mapám, ale také k vytvořeným šablonám, popisným soubor̊um, schémat̊um, styl̊um nebo webovým stránkám, které jsou ned́ılnou součást́ı celého projektu. optimalizace vstupńıch dat vstupńı datové vrstvy byly převzaty z datových sad distribuovaných společnost́ı esri. data obsahovala drobné chyby z hlediska obsahu, např́ıklad španělsko přǐrazené do afriky nebo chyběj́ıćı zakresleńı státu vatikán. vzhledem ke stář́ı dat bylo zapotřeb́ı také doplněńı nového státńıho celku – východńı timor, a změn atribut̊u u některých zemı́ (např́ıklad přejmenováńı státu zair na kongo). vzhledem k rozsahu dat a redundantńı podrobnosti by bylo vhodné před vlastńım zpracováńım data generalizovat. 
jedná se předevš́ım o zjednodušeńı obrys̊u kontinent̊u (pro potřeby geinformatics fce ctu 2007 105 optimalizace procesu generováńı map pomoćı xml atlasu byla data velice podrobná) a spojeńı některých jednoduchých liníı do řetězce liníı. jedinou formou generalizace aplikovanou na vstupńı data bylo odstraněńı nadbytečných atribut̊u (např. kódy měn apod.). daľśı změnou, která byla nezbytná z pohledu procesu transformace dat do kartografického výstupu, bylo převedeńı p̊uvodńıch dat z formátu shapefile do formátu jump gml. tato změna znamenala nár̊ust objemu dat o v́ıce než 400%. mezi vstupńı data se také zdrojový dokument atlas.xml, který obsahuje popis jednotlivých map a daľśıch použitých nástroj̊u, např́ıklad kartografických barevných stupnic. základńım vylepšeńım tohoto souboru je vytvořeńı schématu, které bude popisovat jednotlivé prvky dokumentu, jejich vzájemné vazby a př́ıpadná omezeńı nebo tzv. ” business rules“. schéma by mělo sloužit také ke kontrole a validaci zdrojového souboru a zároveň by mělo zajistit přenositelnost tohoto základńıho stavebńıho prvku aplikace do jiných projekt̊u a př́ıpadná rozš́ı̌reńı. tento schémový soubor by mohl být součást́ı širš́ıho schématu jazyka popisuj́ıćı mapy a jiné kartografické produkty. součást́ı schématu by mohly být vazby na již existuj́ıćı xml deriváty zabývaj́ıćı se geografickými daty (metadatový popis, jazyk definuj́ı diagramové prvky mapy apod.). jako v současnosti nejvhodněǰśı schémový jazyk se jev́ı relax ng doplněný o datové typy použ́ıvané v jazyku w3c xml schema a některé konstrukce jazyka schematron. optimalizace transformačńıch šablon daľśı vylepšeńı se týká také transformačńıch šablon, které sloužily k převáděńı vstupńıch soubor̊u prostorových dat na digitálńı mapy publikované v atlasu mezinárodńıch vztah̊u. optimalizace šablon spoč́ıvá jednak ve zkráceńı kódu (odstraněńı nadbytečných prvk̊u, modularizace, sestavováńı jednotlivých interpretačńıch kartografických metod jako sekvenci základńıch modul̊u) a také v implementaci všech nových prvk̊u jazyk̊u xslt 2.0 a xpath 2.0. z pohledu aplikace transformačńıch a dotazovaćıch jazyk̊u založených na bázi xml do kartografie jsou d̊uležité zejména následuj́ıćı vlastnosti obou výše uvedených jazyk̊u: � pro digitálńı kartografii (předevš́ım pro generováńı map) je výhodná práce sekvencemi a textovými řetězci, které mohou představovat seznamy souřadnic (např. ve formátu gml nebo svg). otázkou je rychlost transformačńıch procesor̊u, které jsou většinou napsány v javě, při zpracováńı takového objemu dat, který je v oblasti geoinformačńıch technologíı běžný. � prohledáváńı a rozřazováńı rozsáhlých dokument̊u obsahuj́ıćı prostorová data s velkým počtem atribut̊u zjednoduš́ı a zřejmě také zrychĺı použ́ıváńı kĺıč̊u a možnost seskupováńı dat na základě zadaného výrazu (velice jednoduše se budou např́ıklad řadit obce na základě př́ıslušnosti k obci s rozš́ı̌renou p̊usobnost́ı). � xslt 2.0 integrovala řadu funkćı exslt, které jsou při tvorbě digitálńıch map nezbytné. např́ıklad se jedná o matematické funkce (součet, pr̊uměr, maximum, minimum) použ́ıvané při tvorbě graf̊u a diagramů při generováńı kartodiagramů nebo při generováńı interval̊u stupnic při generováńı kartogramů. � práce s datovými typy xml schema, které jsou přeb́ırány i do daľśıch aplikaćı (např. 
jazyky relax ng, owl) je d̊uležitá z hlediska tvorby obecného sémantického dokugeinformatics fce ctu 2007 106 optimalizace procesu generováńı map pomoćı xml mentu a také snažš́ı kontrole správnosti dokumentu (zabráńı se tak např́ıklad použ́ıvańı textových řetěz̊u mı́sto č́ısel apod.). � práce s regulárńımi výrazy patř́ı mezi daľśı výhody druhé verze xslt. např́ıklad v svg souborech p̊ujde odstranit vysoké hodnoty jednotlivých souřadnic (dojde ke zmenšeńı velikosti soubor̊u), ” odř́ıznutá“ hodnota bude do souboru vrácena pouze jednou ve formě translačńı transformace. � xslt zásadně změnilo charakter. od stylového jazyku (jakési dokonaleǰśı verze kaskádových styl̊u) se posouvá sṕı̌se do oblasti programovaćıch jazyk̊u, o čemž svědč́ı doplněńı a zdokonaleńı práce s funkcemi, podmı́něné výrazy apod. [nič2005] vyšš́ı úrovni optimalizace bráńı ńızká úroveň implementace xslt 2.0 a xpath 2.0 (stejný problém maj́ı i jiné xml technologie jako např́ıklad svg) v r̊uzných nástroj́ıch. výjimku tvoř́ı xslt procesor saxon použ́ıvaný pro generováńı map atlasu. v budoucnosti (v př́ıpadě daľśıho využ́ıváńı transformačńıch styl̊u) by měly být do transformačńıho stylu doplněny daľśı šablony pro generováńı daľśıch kartografických interpretačńıch metod (r̊uzné typy kartogramů nebo kartodiagramů) a pro analýzu dat (tvorba stupnic, grafy četnosti...). v souvislosti s rozš́ı̌reńım transformačńıho stylu bude dokument atlas.xsl modularizován (rozdělen na několik menš́ıch vzájemně propojených soubor̊u). odstraněńı kartografických nedostatk̊u kartografické nedostatky se projevuj́ı předevš́ım d́ıky tzv. ” autorské slepotě“, d́ıky ńıž tv̊urc̊um map uniknou některé d́ılč́ı nedostatky ve využ́ıváńı kartografických interpretačńıch metod. v př́ıpadě atlasu mezinárodńıch vztah̊u se jedná o � volbu kartografického zobrazeńı, která byla determinována ” geodetickým pohledem“ na zobrazovanou problematiku. jinými slovy při volbě kartografického zobrazeńı byla d̊uležitá minimalizace zkresleńı a výsledná kompozice mapy před zvýrazněńım oblast́ı, které jsou z pohledu mezinárodńıch vztah̊u potřebné (např. rovńıková afrika) – ” geografický pohled“. � různé velikosti ṕısma použité na politické mapě světa evokuj́ı r̊uzný význam popisovaných objekt̊u (stát̊u). � otázky vyvolala také použitá terminologie. vzhledem k poměru map a doprovodného textu by mohla být publikace označena nikoli jako atlas, ale sṕı̌se jako mapová encyklopedie. zlepšeńı datového a komunikačńıho toku zlepšeńı datového a komunikačńıho toku (např. komunikace mezi členy autorského kolektivu, verzováńı jednotlivých fáźı, forma, zp̊usob a četnost zálohováńı apod.) patř́ı v současnosti mezi aktuálńı problémy kartografie. činnost kartografa by měla spoč́ıvat v ř́ızeńı a vedeńı celého atlasového projektu a také v korigováńı ” autorské slepoty“ spolupracovńık̊u (tv̊urc̊u dat, marketingových specialist̊u, designér̊u apod.). geinformatics fce ctu 2007 107 optimalizace procesu generováńı map pomoćı xml v našem konkrétńım př́ıpadě se ukázala jako velice problematická komunikace mezi jednotlivými členy rozsáhlého kolektivu autor̊u. řešeńım by mohlo být např́ıklad použit́ı softwaru pro vedeńı projekt̊u (pro př́ıklad uved’me produkt microsoft office project 2007). v pr̊uběhu praćı jsme se pokusili zavést alespoň online tvorbu a sd́ıleńı dokument̊u (docs.google.com), což se však nakonec nesetkalo s patřičným ohlasem. 
data byla předávána v r̊uzných formátech (často se jednalo o texty ve formátu doc nebo tabulky ve formátu xls), které nemohly být automaticky převáděny do formátu zpracovatelných prostřednictv́ım geoinformačńıch technologíı. v procesu generováńı map se vyskytuj́ı i daľśı riziková mı́sta, která ovšem nejdou eliminovat změnou pracovńıho postupu. změnu muśı vyvolat předevš́ım výrobci podp̊urného software. mezi nejvýrazněǰśı nedostatky patř́ı slabš́ı podpora formátu svg a rychlost zpracováńı velkého množstv́ı dat interpretačńım jazykem java. v atlasu nejsou využity veškeré možnosti vektorového formátu svg. svg např́ıklad umožňuje velice elegantńı a jednoduché definovańı symbol̊u. ty je možné zapsat pouze jednou a daľśı použit́ı těchto symbol̊u lze zajistit pomoćı odkaz̊u, který je zapsaný v jazyku xlink. u takového symbolu je možné nejen zadat souřadnice nového umı́stěńı, ale ” novému” symbolu lze přidat ” nové” atributy, např́ıklad transformaci (svg umožňuje použ́ıváńı změny měř́ıtka, zkoseńı, posun a rotace) nebo jiný vizualizačńı styl. bohužel ne každý software umožňuje regulérńı předáváńı symbol̊u pomoćı xlink odkaz̊u (stejné typ odkaz̊u ovšem funguje v př́ıpadě barevných přechod̊u – gradient̊u). proto bylo nutné veškeré symboly pomoćı stylu a transformačńıho procesu do výsledné mapy koṕırovat, č́ımž se zvláště v př́ıpadě složitých symbol̊u zvětšila velikost výsledného souboru. slabš́ı podporu ze strany výrobc̊u software maj́ı i daľśı vlastnosti formátu svg jako např́ıklad animace, vzorky, ořezové cesty nebo podpora multimédíı. jedńım z daľśıch problémů byla rychlost aplikaćı založených na interpretačńım jazyku java. v jazyce java byly vytvořeny stěžejńı aplikace použ́ıvané pro zpracováńı map atlasu, jako např́ıklad transformačńı procesor saxon, gis software pro zpracováńı dat openjump a grafický editor inkscape – d̊uvodem pro výběr těchto aplikaćı byly předevš́ım multiplatformnost a otevřenost. tyto aplikace velice obt́ıžně a pomalu zpracovávaly rozsáhlá data (pr̊uměrný jml soubor – 14,5 mb, pr̊uměrný svg soubor – 8,5 mb, pr̊uměrný ps soubor – 14 mb, výsledná mapa ve formátu pdf – 1,9 mb). závěr v české republice byla kartografická atlasová tvorba v nedávné minulosti poměrně opomı́jena. tento př́ıspěvek shrnuje zkušenosti źıskané během v́ıce než ročńı práce na atlasu mezinárodńıch vztah̊u. navrhovaná zlepšeńı lze rozdělit do dvou skupin: 1. optimalizace týkaj́ıćı se použité technologie, která je svým zp̊usobem specifická a nav́ıc je stále ve vývoji. 2. optimalizace standardńıch kartografických proces̊u (źıskáváńı, hodnoceńı a modifikace vstupńıch dat, kartografické postupy, metody a použité prvky). 3. zkvalitněńı komunikačńıch proces̊u v rámci autorského kolektivu. geinformatics fce ctu 2007 108 optimalizace procesu generováńı map pomoćı xml použ́ıváńı webových technologíı při tvorbě atlas̊u je velice zaj́ımavou technologíı, která odpov́ıdá současným aktuálńım trend̊um světové kartografie (ica research agenda [vir2007]). z tohoto d̊uvodu by mohly navrhované optimalizačńı procesy sloužit jako podklad pro daľśı projekty podobného charakteru. seznam použitých zdroj̊u 1. [bar2007] baranová, m., čada, v., čerba, o. kartografická část atlasu mezinárodńıch vztah̊u. in kartografické listy 15. bratislava: kartografická spol’očnost’ slovenskej republiky, geografický ústav slovenskej akadémie vied, 2007, str. 5-12. isbn 80-89060-10-8. issn 1336-5274. 2. [čer2006a] čerba, o. cartographic e-documents & sgml/xml. in international symposium gis. ostrava 2006. 
ostrava: vysoká škola báňská – technická univerzita, 2006. dostupné z: 3. [čer2006b] čerba, o. svg v kartografii [online]. in geoinformatics fce ctu 2006. praha: 2006. issn 1802-2669. dostupné z: 4. [čer2007a] čerba, o. tvorba map pro atlas mezinárodńıch vztah̊u. in 9. odborná konference doktorského studia juniorstav 2007. brno: 2007. isbn 978-80-214-3337-3. 5. [čer2007b] čerba, o. xml technologies for cartographers. in xxiii international cartographic conference. moskva : international cartographic association, 2007. 6. [nič2005] nič, m. xslt 2.0 tutorial [online]. 13.12.2005. dostupné z: 7. [ten2003] tennakoon, w.t.m.s.b. visualization of gml data using xslt [online]. 2003. dostupné z: 8. [vir2007] virrantaus, kirsi, fairbairn, david. ica research agenda in cartography and gi science [online]. in ica news, number 48, june 2007. international cartographic association, 2007. dostupné z: 9. [wai2007] waisová, š.; baranová, m.; čada, v.; čerba, o.; jedlička, k.; šanc, d.; weger, k.; cabada, l.; romancov, m. atlas mezinárodńıch vztah̊u: prostor a politika po skončeńı studené války. 1. vyd. plzeň: aleš čeněk, 2007. 158 s. isbn 978-80-7380015-4. geinformatics fce ctu 2007 109 geinformatics fce ctu 2007 110 iso 19115 for geoweb services orchestration jan růžička institute of geoinformatics, vsb-tu of ostrava jan.ruzicka@vsb.cz keywords: iso 19115, geoweb, orchestration, bpel, midas, dublin core, inspire kĺıčová slova: iso 19115, geoweb, orchestrace, bpel, midas, dublin core, inspire abstract the paper describes theoretical and practical possibilities of iso 19115 standard in a process of generating dynamic geoweb services orchestras. there are several ways how to instantiate orchestras according to current state of services and user needs, some of them are briefly described in the paper. the most flexible way is based on metadata that describe geodata used by services. the most common standard used for geodata metadata in the eu is iso 19115. the paper should describe if the standard is able (without extensions) to hold enough information for orchestration purposes. the paper defines minimal set of metadata items named ”iso 19115 orchestration minimal” that must be available for geodata evaluation in a process of orchestration. a second part of the article will be probably less optimistic. it should describe how are (or were, or are planned to be) iso 19115 possibilities used for metadata creation nowadays in the czech republic. this part is based on analyses of iso 19115 core, midas system, dublin core and inspire metadata ir. abstrakt př́ıspěvek popisuje teoretické a praktické možnosti standardu iso 19115 v procesu tvorby dynamických orchestr̊u služeb platformy geoweb. v zásadě je možné vytvářet instance orchestr̊u mnoha zp̊usoby na základě aktuálńıho stavu služeb a požadavk̊u uživatele. některé z nich jsou stručně popsány v př́ıspěvku. nejpružněǰśı zp̊usob tvorby je založen na metadatech, které popisuj́ı geodata využ́ıvaná službami. v současné době je v rámci eu nejvyuž́ıvaněǰśım standardem standard iso 19115. př́ıspěvek by měl popsat zda je standard schopen (bez rozš́ıřeńı) pojmout všechny nezbytné položky pro potřeby orchestrace. v př́ıspěvku je definována minimálńı sada metadatových položek nazvaná ”iso 19115 orchestration minimal”, která je nezbytná pro posouzeńı geodat v procesu orchestrace. 
druhá část př́ıspěvku bude zřejmě méně optimistická nebot’ se bude zabývat jak to vypadá s reálnými možnostmi využit́ı geinformatics fce ctu 2008 51 iso 19115 for geoweb services orchestration potenciálu standardu iso 19115 pro orchestraci v rámci čr. tato část je založena na analýze iso 19115 core, systému midas, dublin core a inspire metadata ir. orchestras an orchestration is a process where are modelled processes (real or abstract) in a way of formalized description. a process modelling is a technique that uses several description tools, mainly schemas or diagrams, to describe usually real processes inside enterprise. the processes can lead across several organizations. a model of a process is transformed from abstract languages (bpmn (business process modelling notation), uml (unified modelling language)) to a form that can be directly run on a computer. in this area of runnable models of processes is the most known bpel (business process execution language). a process run means reading inputs, invoking web services, deciding according to results, repeating some parts of the process and other necessary operations. a process modelling offers possibilities how to formally describe processes inside an enterprise, to find duplicate processes, to find processes that are not optimised, etc. a process modelling helps with processes optimisation and with sources management optimisation. when it is possible, than the description is available in a form of bpel-like language and processes can be directly invoked. geoweb services orchestration can be done in many ways. the ga 205/07/0797 team has researched the two ways of possible orchestration. simple orchestras the first way is based on orchestras where the services searched during the building orchestra instance are using the same data sources in a meaning of data sources and algorithms. during the building orchestra instance are searched only services that use the same data source and the same algorithms for data source and input manipulation. data source content can change only on spatio-temporal extent of the working area. we can speak about services replication (or distribution in a horizontal plane). current instances of the services that are connected to the orchestra are selected according to current state of the services, such as performance, speed or provider. these services differ on physical binding. these kind of orchestras is focused on optimisation of orchestras run. for these kind of orchestras is not needed any specific manipulation. there is necessary to identify same services using some key. for our testing purposes we use common identification, based on standardisation organisation identification, standard identification, service identification. such identification is described on the following example. http://gis.vsb.cz/ogc/wms/1.1.1/zabaged/0.1. items are defined by url. first item is domain of the service type guarantee. second item is abbreviation of standardisation organisation name. third item is abbreviation of standard name. fourth item is a version of the standard. fifth item is abbreviation of the service. last item is a version of the service type. this type of orchestras is simpler to manage than the second one. geinformatics fce ctu 2008 52 http://gis.vsb.cz/ogc/wms/1.1.1/zabaged/0.1 iso 19115 for geoweb services orchestration dynamically created orchestras the second way is based on orchestras where current instances of the services can be just similar to each other in a meaning of data sources and algorithms. 
for example we can use service that uses railways data source where tracks are just simple lines between stations or we can use service that uses railways data source where tracks are modelled by real headway. we can switch between these sources in many cases, such as routing (finding the best routes) where the main parameter for routing is time. this type of orchestras is more difficult to manage than the first one. our research shows that usually the first type of orchestras will be used, but there are still situations when a system for orchestration should be able prepare second type of orchestras. there are two ways how to handle this problem. the first solution is simple, but difficult to manage in a meaning of long time term, because this solution is rather static than dynamic. there must be simple database (no matter how is organised – relational, xml) where are defined relations between data sources (services). related services can be named group of similar services. the second solution is based on data source evaluation based on metadata analyses. this article should describe, why is this way so complicated and probably impossible. metadata items useful for data evaluation in a process of searching available services for dynamic orchestras building we are looking for similar data sources. first of all we have to specify metadata items that can be used for evaluating that the data are similar enough for our orchestra. there are many different standards in this area that define metadata items, but nowadays probably the most important one is iso 19115 (iso 19139). for our research we identify only items from this standard. we can name this set of items iso 19115 orchestration full. later is described minimal set of the items that are necessary for running similarity tests. administrative metadata item description of usage and problems md metadata/ datestamp date that the metadata was created. useful for evaluation of metadata reliability. md metadata/ metadatamaitenance frequency and scope of metadata updates. useful for evaluation of metadata reliability. md identification/ resourcemaitenance frequency and scope of data updates. individual items are described later. geinformatics fce ctu 2008 53 iso 19115 for geoweb services orchestration md maintenanceinformation/ maintenanceandupdatefrequency userdefinedmaintenancefrequency updatescope updatescopedescription only supplemental information, but useful when information about temporal extent is not available md referencesystem a reference system is not necessary for analyses, but for using the service. usually we have enough information in epsg code, that is included in metadata for a service, but sometimes full description is necessary. table 1: administrative metadata items from iso 19115 orchestration full quality metadata item description of usage and problems md dataidentification/ spatialresolution md resolution/ equvivalentscale distance density of spatial data. very useful. we can use both options of the resolution, but the distance is better valuable. md metadata/ dataqualityinfo quality of a resource. individual items are described later. dq dataquality very important item. items (associations are described later). li lineage/ statement processstep source very useful items, but unfortunately only simple table of items and the free text domain is used. very difficult to handle free text for automatic evaluation. only items for defining source are not described only by free text, but this is not enough. 
dq element/ nameofmeasure measureidentification measuredescription evaluationmethodtype evaluationmethoddescription evaluationprocedure datetime result this abstract element should be completely included. of course the main item is result described later. geinformatics fce ctu 2008 54 iso 19115 for geoweb services orchestration dq result/dq conformanceresult/ specification explanation pass dq result/dq quantitativeresult/ valuetype valueunit errorstatistic value this items are quite well defined and useful for evaluation. even domains are good enough for automatic evaluation. dq completeness/ dq completenesscommission dq completenessomission described by dq element. dq positionalaccuracy/ dq absoluteexternalpositionalaccuracy dq griddeddatapositionalaccuracy dq relativeinternalpositionalaccuracy described by dq element. dq temporalaccuracy/ dq accuracyofatimemeasurement dq temporalconsistency dq temporalvalidity described by dq element. dq thematicaccuracy/ dq thematicclassificationcorrectness dq nonquantitativeattributeaccuracy dq quantitativeattributeaccuracy described by dq element. table 2: quality metadata items from iso 19115 orchestration full usage metadata item description of usage and problems md identification/ resourcespecificusage specific applications for which the resource was used. md usage/ specificusage userdeterminedlimitations very useful item, but unfortunately only the free text domain is used. very difficult to handle free text for automatic evaluation. md identification/ resourceconstraints constraints on a resource. individual items are described later. md constraints/ uselimitation very useful item, but unfortunately only the free text domain is used. very difficult to handle free text for automatic evaluation. geinformatics fce ctu 2008 55 iso 19115 for geoweb services orchestration md legalconstraints/ accessconstraints useconstraints otherconstraints very useful items, but unfortunately only simple table of items and the free text domain is used. very difficult to handle free text for automatic evaluation. information that there is copyright or license is not very useful for evaluation, if the resource can be used in orchestration. md securityconstraints/ classification usernote classificationsystem handlingdescription useful only in some very specific applications. only simple table of items and the free text domain is used. very difficult to handle free text for automatic evaluation. table 3: usage metadata items from iso 19115 orchestration full extent metadata item description of usage and problems md dataidentification/ extent ex extent/ description geographicelement temporalelement verticalelement ex geographicextent/ extenttypecode ex boundingpolygon/ polygon ex geographicboundingbox westboundlongitude eastboundlongitude southboundlatitude northboundlatitude ex geographicdescription/ geographicidentifier ex temporalextent/ extent ex verticalextent/ minimumvalue maximumvalue unitofmeasure verticaldatum spatio-temporal extent. for geographic extent is preferred polygon instead of bounding box. geinformatics fce ctu 2008 56 iso 19115 for geoweb services orchestration table 4: extent metadata items from iso 19115 orchestration full content and structure metadata item description of usage and problems md dataidentification/ spatialrepresentationtype method used for spatial representation. list of available items is very simple. we can use it only for distinguish between raster and vector. the other items described later must be used for better evaluation. 
md dataidentification/ language language used within the dataset. necessary for evaluation. we can use dataset with different language usually only when dealing only with geometry or topology. md dataidentification/ topiccategory main theme of the dataset. not very useful, but can be used for basic evaluation. md keywords/ keyword type thesaurusname more useful than topiccategory for basic evaluation. md gridspatialrepresentation/ numberofdimensions axisdimensionsproperties cellgeometry md dimension/ dimensionname dimensionsize resolution more precise information about grid. we can include also md georectified and md georeferenceable, but these are not necessary for analyses. md vectorspatialrepresentation/ topologylevel geometricobjects md geometricobjects/ geometricobjecttype geometricobjectcount more precise information about vector. number of object can be significant for analyses of similarity. md featurecataloguedescription/ featuretypes featurecataloguecitation information about used feature catalogue and selected set of features from the catalogue. md coveragedescription/ attributedescription contenttype dimension information about values in grid data cells. geinformatics fce ctu 2008 57 iso 19115 for geoweb services orchestration md imagedescription/ illuminationelevationangle illuminationazimuthangle imagingcondition imagequalitycode cloudcoverpercentage processinglevelcode compressiongenerationquantity triangulationindicator md rangedimension/ sequenceidentifier descriptor md band/ maxvalue minvalue units bitspervalue peakresponse tonegradation scalefactor offset information about digital image record. table 5: content and structure metadata items from iso 19115 orchestration full minimal set of metadata items for automatic data evaluation following list shows minimal set of metadata items, that must be available to test similarity of the analysed datasets. we can name this set as iso 19115 orchestration minimal. without these items are not metadata useful for running tests of similarity. this recommendation should be applied to all new created metadata. there are not included items, that are generally useful, but used domain for their specification is not suitable for automatic evaluation. some of the items are not applicable for all resources (e.g. you can not specify md band for vector data). 
md dataidentification/spatialresolution md resolution/equvivalentscale md resolution/distance md metadata/dataqualityinfo dq dataquality li lineage/source dq completenesscommission/dq element/dq result geinformatics fce ctu 2008 58 iso 19115 for geoweb services orchestration dq completenessomission/dq element/dq result dq absoluteexternalpositionalaccuracy/dq element/dq result dq griddeddatapositionalaccuracy/dq element/dq result dq relativeinternalpositionalaccuracy/dq element/dq result dq accuracyofatimemeasurement/dq element/dq result dq temporalconsistency/dq element/dq result dq temporalvalidity/dq element/dq result dq thematicclassificationcorrectness/dq element/dq result dq nonquantitativeattributeaccuracy/dq element/dq result dq quantitativeattributeaccuracy/dq element/dq result md dataidentification/extent ex extent/geographicelement/ex boundingpolygon/polygon ex extent/geographicelement/ex geographicboundingbox ex extent/temporalelement/ex temporalextent/extent ex extent/verticalelement/ex verticalextent md dataidentification/spatialrepresentationtype md dataidentification/language md dataidentification/topiccategory md keywords md keywords/keyword md keywords/type md keywords/thesaurusname md gridspatialrepresentation md gridspatialrepresentation/numberofdimensions md gridspatialrepresentation/axisdimensionsproperties md dimension/dimensionname md dimension/dimensionsize md dimension/resolution md gridspatialrepresentation/cellgeometry md vectorspatialrepresentation md vectorspatialrepresentation/topologylevel geinformatics fce ctu 2008 59 iso 19115 for geoweb services orchestration md vectorspatialrepresentation/geometricobjects md geometricobjects/geometricobjecttype md geometricobjects/geometricobjectcount md featurecataloguedescription md featurecataloguedescription/featuretypes md featurecataloguedescription/featurecataloguecitation md coveragedescription md coveragedescription/attributedescription md coveragedescription/contenttype md coveragedescription/dimension md rangedimension/sequenceidentifier md rangedimension/descriptor md band md band/maxvalue md band/minvalue md band/units md band/bitspervalue md band/peakresponse md band/tonegradation md band/scalefactor md band/offset md imagedescription md imagedescription/illuminationelevationangle md imagedescription/illuminationazimuthangle md imagedescription/imagingcondition md imagedescription/imagequalitycode md imagedescription/cloudcoverpercentage md imagedescription/processinglevelcode md imagedescription/compressiongenerationquantity md imagedescription/triangulationindicator geinformatics fce ctu 2008 60 iso 19115 for geoweb services orchestration expected metadata extent previously defined set of items named iso 19115 orchestration minimal will not be probably available generally in the future. we can expect that only a few closed communities e.g. companies can be able have all resources described in this level of detail. in general we can expect that available metadata will not be never so detailed. we can expect that metadata available in the czech republic are going to be prepared according to several types of detail. this is necessary to know for geodata evaluation. these types are: � metadata according inspire ir (inspire, 2007), � metadata according to iso 19115 core (iso/tc 211, 2003), � metadata according to dublin core basic set (dcmi, 2007), � metadata according to midas database (cagi, 2007) completeness. other alternatives are not expected. 
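to make the automatic evaluation against this minimal set more concrete, a short fragment follows. it is a sketch added here for illustration only, not part of the cited work: the flat dictionary record, the path spellings and the helper names are assumptions, and a real implementation would first have to parse iso 19139 xml metadata documents.

# sketch: test whether a metadata record contains the "iso 19115 orchestration minimal" items
ORCHESTRATION_MINIMAL = {
    "md_dataidentification/spatialresolution",
    "md_dataidentification/language",
    "md_dataidentification/topiccategory",
    "md_dataidentification/extent",
    "md_dataidentification/spatialrepresentationtype",
    "md_keywords/keyword",
    "li_lineage/source",
    "dq_absoluteexternalpositionalaccuracy/dq_element/dq_result",
    # ... the remaining items of the minimal set would be listed here
}

def coverage(record: dict) -> float:
    """fraction of the minimal set that is filled in the record (record maps path -> value)."""
    present = {p for p in ORCHESTRATION_MINIMAL if record.get(p) not in (None, "", [])}
    return len(present) / len(ORCHESTRATION_MINIMAL)

def usable_for_similarity_tests(record: dict) -> bool:
    # the text above requires the full set, so anything below 100 % is rejected
    return coverage(record) == 1.0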
metadata according to inspire the list of items is used from draft implementation rules (inspire, 2007). level 1 is a basic level, that will be required always (if the conditional rule does not define different options). � resource title. � temporal reference – in a case when information is meaningful. � geographic extent of the resource. � resource language – in a case when text is used. � resource topic category. � keyword. � service type – in a case of a service. � resource responsible party. � abstract. � resource locator – in a case if any reference exists. the second level is extended level and we can not expect full implementation of this level for all catalogues (datasets or services). � constraints. � lineage. � conformity. geinformatics fce ctu 2008 61 iso 19115 for geoweb services orchestration � service type version – in a case of a service. � operation name – in a case of a service. � distributed computing platform – e.g. web services. � resource identifier – e.g. uri. � spatial resolution. inspire specifies other metadata elements, that can be available, but their usage by data (services) provides is disputable. the same problem is with the second level of metadata, where usage is based on provider decision. we can expect only following items: resource title, geographic extent of the resource, resource language, resource topic category, keyword, resource responsible party, abstract and in some cases temporal reference. that level of detail is not enough for the orchestration, but it can be used for a basic services selection. metadata according to iso 19115 core iso 19115 core is more detailed than inspire requirements and is going to be better applicable for orchestration. but we are still missing for example quality reports. items in the core are mandatory (m), conditional (c) or optional (o). � dataset title (m) � dataset reference date (m) � dataset responsible party (o) � geographic location of the dataset (by four coordinates or by geographic identifier) (c) � dataset language (m) � dataset character set (c) � dataset topic category (m) � abstract describing the dataset (m) � distribution format (o) � additional extent information for the dataset (vertical and temporal) (o) � spatial resolution of the dataset (o) � spatial representation type (o) � reference system (o) � lineage (o) � on-line resource (o) � metadata file identifier (o) � metadata standard name (o) � metadata standard version (o) geinformatics fce ctu 2008 62 iso 19115 for geoweb services orchestration � metadata language (c) � metadata character set (c) � metadata point of contact (m) � metadata date stamp (m) metadata according to dublin core dublin core is general standard and can be used for definition of own items, but we can not expect that providers will use such capabilities. they will probably use only simple metadata items list. � title � creator � subject � description � publisher � contributor � date � type � format � identifier � source � language � relation � coverage � rights metadata according to midas database completeness we have analysed midas database and we can probably expect same providers behaviour in the future. the following table categorised metadata items according to completeness in the midas database. midas system contains metadata about 3400 datasets. mandatory and conditional items were always filled (was controlled by the system). optional items were filled in a case, when list of options was available. 
very interesting is completeness of alternate title, temporal extent (date from), reference data and dataset usage. out of interest are quality elements (except lineage). geinformatics fce ctu 2008 63 iso 19115 for geoweb services orchestration completeness metadata items 80 – 100 % title, abstract, coordinate system for metadata, metadata update, spatial schema, lineage, horizontal spatial accuracy, update frequency, data structure, format, language, classification, direct coordinate system, responsible party. 60 – 80 % alternate title, temporal extent (date from), planar extent (by coordinates), reference data. 40 – 60 % dataset usage 20 – 40 % memo, planar extent (by description) 5 – 20 % abbreviated title, version, purpose of production, temporal extent (by description), metadata language, spatial coverage, scale, temporal extent (date to). < 5 % english title, english abstract, update date, fees, metadata update plan, vertical spatial accuracy, logical consistency, completeness, homogeneity, resolution, quality, vertical extent, distribution units, medium, indirect reference system, vertical reference system, features description table 6: completeness of the metadata items in the midas database comparison to iso 19115 orchestration minimal iso 19115 orchestration minimal inspire iso 19115 core dublin core midas* md resolution + – – li lineage/source + + + + dq completenesscommission – – – dq completenessomission – – – dq absoluteexternalpositionalaccuracy – – – +** dq griddeddatapositionalaccuracy – – – dq relativeinternalpositionalaccuracy – – – dq accuracyofatimemeasurement – – – dq temporalconsistency – – – dq temporalvalidity – – – dq thematicclassificationcorrectness – – – dq nonquantitativeattributeaccuracy – – – dq quantitativeattributeaccuracy – – – + ex boundingpolygon + + + + ex geographicboundingbox + + + + ex temporalextent + + + + ex verticalextent + + + spatialrepresentationtype – – – + language + + + + topiccategory + + + + md keywords + – + geinformatics fce ctu 2008 64 iso 19115 for geoweb services orchestration md gridspatialrepresentation – – – md vectorspatialrepresentation – – – +** md featurecataloguedescription – – – + md coveragedescription – – – md imagedescription – – – table 7: comparison to iso 19115 orchestration minimal * items completed over 60% has been included ** partly the following table shows percent of the items that will be probably included according to selected standard, directive or system. standard, directive, system percent of the iso 19115 orchestration minimal items available inspire 34 iso 19115 core 27 dublin core 31 midas 42 table 8: percent of the iso 19115 orchestration minimal items available conclusion results of the research are not so optimistic, because we can not expect in any potential case that metadata are enough detailed for the efficient orchestration. to build orchestras dynamically needs to use alternative ways, how to evaluate served geodata. according to results of our research, we have decided to use metadata for geodata, but not as only single source for geodata evaluation. we are preparing methodology how to deal with evaluation. 
basic principles of the methodology are summarised in the following points: � if it is possible use simple orchestras � do not base creating groups of similar services on metadata for geodata � use experts’ evaluation of the orchestras results to create groups of similar services � update groups of similar services according to new results evaluation � evaluate simple orchestras’ results as well if you are interested in the prepared methodology, please read the arcitle that will be published in the proceedings of the symposium gis ostrava 2009. geinformatics fce ctu 2008 65 iso 19115 for geoweb services orchestration references cagi. (2007). midas. 20012007. at http://gis.vsb.cz/midas/, [accessed 2 july 2007]. dcmi. (2007) dublin core element set v. 1.1. – reference description, online1, [accessed 12 april 2007]. inspire. (2007). dt metadata – draft implementing rules for metadata at online2, [accessed 12 april 2007]. iso/tc 211. (2003). iso/fdis 19115:2003. iso/tc 211 secretariat, oslo, norway, 152 p. růžička, j., kaszper, r. opět o metadatech v geoinformatice. proceedings 1. národńı kongres v česku – geoinformatika pro každého, may 29-31 2007, mikulov, czech republic, online3, [accessed 2 july 2007]. support the article is supported by grant agency of the czech republic gacr as a part of the project ga 205/07/0797 geoweb services orchestration. the article is supported by open source community as well. we have used open source projects geonetwork open source, wsco, apache tomcat, jetty, open office, gimp, dia, postgis, php, postgresql, apache http server, gnu/linux ubuntu, gnu/linux debian, x11, mysql, freefont and others for this article. 1 http://dublincore.org/documents/dces/ 2 http://www.ec-gis.org/inspire/reports/implementingrules/draftinspiremetadatairv2 20070 \ 202.pdf 3 http://mikadapress.com/prednasky/ruzicka.pdf geinformatics fce ctu 2008 66 http://gis.vsb.cz/midas/ http://dublincore.org/documents/dces/ http://www.ec-gis.org/inspire/reports/implementingrules/draftinspiremetadatairv2_20070202.pdf http://mikadapress.com/prednasky/ruzicka.pdf issues of gis data management issues of gis data management tomáš richta department of computer science and engineering faculty of electrical engineering, czech technical university in prague e-mail: richtt1@fel·cvut·cz keywords: gis, cad, object-orientation, data modelling, data management, database abstract the paper deals with current issues of spatial data modelling and management used by spatial management applications. as a case study for explaining the problem, we use comparison of two main groups of software tools covering this area gis and cad systems and the posibilities of their integration. studying its functionality, we have found two main problematic issues. the first of them is the density distribution characteristics of stored data according to described area. cad systems are oriented towards modeling individual man-made objects and structures with relatively high level of detail, so the data stored covers small areas with huge amount of information. on the other side gis applications maintain large-scale models of real world with significantly lower amount of detail. here the density distribution of data coverage is better balanced. so the combination of described different densities is the first problem. the second watched issue is the way of storing spatial data. while cad data are usually stored in individual files (like dxf, iges), gis data tend to be stored in files or realtional databases. 
the question we see is, if it is possible to store cad data along with gis data in the same database in spite of different distribution densities and different data models. our paper describes ways of solving this problem. motivation at the beginning of our work we started thinking about an information system capturing real world with the highest achievable level of detail. that means system with 1:1 model of real world entities. such a system must be able to describe visible and invisible properties of captured objects in the manner to be easily achievable and logically compounded both in computer memory and data warehouse. mainly we want to describe the data management background needed for such a system. two main categories of systems that partly solves our problem are gis and cad applications. the aim of both is to model the real world, but each one does this by its own way. gis applications are constructed for maintaining information in connection with its geographical location. with regard to wolrd scaling factor, gis data is reativelly well distributed in the space. cad data on the other side relate to only very small areas consuming megabytes of data. data density distribution in both systems is different. so the question is, which data structures and data management approaches use to obtain global-oriented information system like a gis with local-detailed information description like a cad systems. geinformatics fce ctu 2006 56 issues of gis data management previous work we tried to investigate whether someone is solving the problem of such a spatial management system design. few relevant papers concerning our topic could be thematically separated into these areas of interest: cad/gis integration, 3d gis data modelling, an object-oriented approach and 3d gis data management. in following chapters we try to explain the problems mentioned above introducing the ideas of some experts followed by our comments. gis/cad integration observing the situation in the area of cad and gis integration, we found it almost untouched. that could be partly caused by the complexity of solving such a problem, partly by the lack of interest on the side of gis and cad vendors. both of them have very expensive and broadly used systems, that ensure them good living. usually it is exactly because the data formats used in cad are so different and hard to transform, that using one system is the only choice. so all interoperability tendencies are against the profit of vendors, thus unwelcomed. maybe some development is done behind the curtain, which could be learned from recent google activities in the geospatial area. but we can only speculate about that. scientific papers are only describing many differences between those two worlds, which of course are doubtless. gis aspects [2,3] � landscape-level analysis and mapping � advanced information tools � mostly 2d modelling � database based � optimized for data retrieval � 1:5000 scale and below � constrained editing environment cad aspects [2,3] � object-level design and drafting � advanced drawing tools � 3d modelling � file based � optimized for data design � 1:40-5000 scale � unconstrained editing environment cad systems deal with large-scale models without maintenance of attributes and geographical coordinate systems. gis are on the other side are able to manage small-scale models with geinformatics fce ctu 2006 57 issues of gis data management attributes and a variety of different geographic coordinate systems. 
cad and gis share one major characteristic both deal with geometry [1]. cads usually store data in file format, giss more often maintain data permanently in databases. cad systems generally assume an orthogonal world, while gis systems deal with data sources based on the model of spherical world. also, all these different pieces of information are created and maintained by totally different organizations with different tools and different utilization goals [2]. as could be seen the integration process probably leads to use the cad for data capture, design and modelling, while gis for data management, analysis and visualizing. solution for an integrated cad/gis framework must start with the design of final data warehouse. the solution could be found in mapping both data into the neutral data model [2, 5]. so the first issue is to develop a proper data model that could then be integrated into designed spatial management system. now let us know the stage where the solving of this problem could be found. because of the common feature of cad and gis that is 3d representation, we will show only data models that deal with 3d representation of spatial data. 3d gis data model among the important 3d data models for gis applications belongs: molenaar’s fds, wang and gruen’s v3d [14], zlatanova’s sss [13], pilouk’s ten, shi et al.s oo3d [16], coors’s udm [15] and balovnev et als geotoolkit [17]. here we introduce a brief summary of their features. we don’t want to describe them in detail or compare each other, because it has been done previously. we just want to get an idea, how the problem is solved. all those mentioned models are vector and boundary based, which partition spatial objects as points, lines faces and bodies or similar geometric primitives. first one was the fds, which partitioned the space into non-overlapping objects and thus tried to ensure 1:1 relationships between the primitives and the spatial objects of same dimension, e.g. surface and face. fds describes the basic geometric elements: node, edge, arc and face and four spatial objects: point, line, surface and volume. as a first model it was broadly discussed and frequently extended like in v3d. the distinct feature of v3d model is, that the geometric information is combined with attribute information [14]. then the ten was introduced which includes node, arc, triangle and tetrahedron as the basic elements. the most important part in constructing a spatial object in the ten model is to decompose the object into a set of the composed tetrahedrons. the sss model is a further development of the fds model. compared to fds, the sss keeps the explicit relationships between body and face and eliminates the edge and arc object. on the other hand, the sss keeps the relationship of a geometric objects and attribute data such as texture. oo3d uses node, segment and triangle as basic elements [16]. udm and sss are quite similar. both don’t support the arc and the edge elements. because of this restriction only polyhedrons can be represented in the udm [15]. particular attention requires the geotoolkit, because it is not only a data model, but also an object-oriented geo-database kernel system for the support of 3d geological applications, which demonstrates the potential of object-oriented concept in 3d database. geotoolkit deals primarily with two basic notions: a spatialobject and a space (a collection of spatial objects). a spatial object is defined as a point set in the 3d euclidean space. 
object-oriented geinformatics fce ctu 2006 58 issues of gis data management method is a key nature of geotoolkit, which helps to construct the object hierarchy. the group gathers spatial objects of different types into a collection and then is treated as a single object. geotoolkit backbone is a class library for the efficient storage and retrieval of spatial objects within an object-oriented database. currently geotoolkit offers classes for representation and manipulation of simple (point, segment, triangle, tetrahedron) and complex (curve, surface, solid) 3d spatial objects. the application developed with geotoolkit simply inherits geometric functionality from geotoolkit, extending it with application-specific semantics [13, 17, 18]. surveying 3d data models, the paradigm of object-orientation frequently appeared. because it seems to be very important for gis researchers, we have to explain it briefly. an object-oriented approach mdittrich’s definition says that there are three types of object/orientation: structural object orientation any entity, independent of whatever complexity and structure, may be represented by exactly one object, no artificial decomposition into simpler parts due to technical restrictions should be necessary, operational object orientation operations on complex objects are possible without having to decompose the objects into a number of simple objects, behavioral object orientation a system must allow its objects to be accessed and modified only through a set of operations specific to an object type [7]. there are four main concepts covering object orientation. the first concept is the encapsulation. it means that the object or some group of objects (class) and the procedures (methods) defined on it are stored and managed together. to activate a procedure the program sends a message to an encapsulated data-procedures set, in the consequence of the procedure’s activity the set can send another message to another set, etc. the second concept is the inheritance. the inheritance is related to the class hierarchy. if we have a subdivision of a class, then the subclasses inherit from the data and methods. the third concept is the object identity. it means that despite different transformations the object’s identification should not change. there is a fourth concept, the so-called polymorphism. the word polymorphism we can interpret as different responses to the same message depending on the object in the address of the message. for example we can send a message: plot to the address a,b,c. if into the object a is encapsulated a procedure of a circle, in the object b that of an ellipse and in the object c that of a square, then depending on the addresses, the command plot will result a circle, an ellipse or a square [9]. as would be seen the object-oriented approach provide a number of mechanisms to model real objects in a natural way. 3d gis data management as a last issue we have to describe the 3d gis data management. going through the ideas of many experts, we have to notice, that the discussion about the 3d gis data management is concerned in comparing the relational and object database management systems. the relational data model is the most common one. it seems to be suitable for modelling commercial data for which humans may have the mental model of tables, such as bank accounts, telephone calls, etc. but it is not proper for modelling data that describe spatial phenomena. 
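the plot example above can be made concrete in a few lines; the fragment is added purely as an illustration of the four concepts (encapsulation, inheritance, object identity, polymorphism) and is not code from any of the cited systems – all class and attribute names are assumed.

# the same message "plot" produces a different response depending on the receiver
class Shape:
    def __init__(self, oid):
        self.oid = oid                      # object identity, kept across transformations
    def plot(self):                         # common interface, redefined in the subclasses
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, oid, centre, radius):
        super().__init__(oid)
        self.centre, self.radius = centre, radius   # data encapsulated together with methods
    def plot(self):
        print(f"circle {self.oid}: centre {self.centre}, radius {self.radius}")

class Ellipse(Shape):
    def __init__(self, oid, centre, a, b):
        super().__init__(oid)
        self.centre, self.a, self.b = centre, a, b
    def plot(self):
        print(f"ellipse {self.oid}: centre {self.centre}, axes {self.a} x {self.b}")

class Square(Shape):
    def __init__(self, oid, corner, side):
        super().__init__(oid)
        self.corner, self.side = corner, side
    def plot(self):
        print(f"square {self.oid}: corner {self.corner}, side {self.side}")

for obj in (Circle("a", (0, 0), 1.0), Ellipse("b", (1, 1), 2.0, 1.0), Square("c", (2, 2), 1.5)):
    obj.plot()                              # polymorphism: one message, three behaviours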
relational database management systems are suitable and successful for applications dealing with weakly structured data, but they fail when they are used for applications whose data have a complex structure. since the relational data model does not match the natural concepts humans have about spatial data, users must artificially transform their mental models into a restrictive set of non-spatial concepts. the object-oriented data model is built on the four basic concepts of abstraction: classification, generalization, association, and aggregation [7]. many researchers argue that spatial information systems will benefit from the use of object-oriented database management systems in various ways: the architecture of a gis will become clearer, so that the maintenance of gis software will be easier and its life cycle longer, and programmers need not worry about aspects of the physical implementation of data [7, 8, 10, 11, 4]. object-oriented databases have been portrayed as the solution for complex applications such as geographical information systems. a further motivation for the use of an object-oriented approach to the production of such a system is therefore the expectation that the approach will result in a system which has a clean interface and is easier to maintain than an equivalent system built using conventional programming techniques [10]. developers of object-oriented gis systems agree that a new object-oriented database has to be developed, because there is a need to re-engineer the database against performance and other problems that will arise when such a system is tested [4]. gis and cad problem drawing on the papers mentioned above, we add our own comments on the described issues. every current gis application is able to manage and display points, lines and polygons, because gis data are usually stored as points, lines and polygons. let us call them geometrical primitives. there is no single standard among gis data formats, but the most common file format is the esri shapefile. each shapefile contains the locations of one type of geometrical primitive covering one thematic area and is called a layer. the shapefile is accompanied by a database file, which contains attribute information about the geometrical primitives from the shapefile, and by an index file, which records the connection between the shapefile primitives and the database records. gis applications are constructed to load these files and pile up the layers to produce the final map. this approach sometimes turns into storing all the described information in a relational database whose structure is similar to the structure of the shapefiles. the problem here lies in the fragmentation of the stored information. points, lines and polygons representing objects are stored separately, and of course the data types used are much simpler than the modelled real-world objects. definition of new data types is not supported in gis applications, because it is very problematic to change, for example, a relational data structure. almost the same problem can be found in cad systems, but here the situation starts to change. usual cad data are stored as points, lines and polygons, but some of the newest applications, especially in the machine-building industry, let the user define his own objects and compose complex structures from simpler ones. there is, however, no data model supporting storage of the defined objects, so they must still be saved as points, lines and polygons.
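to contrast with the fragmentation into primitive layers described above, the following sketch keeps one real-world object together with its attributes and behaviour; the class names and the commented persistence call are assumptions made for illustration, not the api of any particular object-oriented database.

# one complex object kept (conceptually) as a whole, instead of being split
# into primitive layers plus attribute tables; "db.save" is hypothetical.
class Wall:
    def __init__(self, polygon, material):
        self.polygon = polygon        # list of (x, y, z) vertices
        self.material = material

class Building:
    def __init__(self, name, walls, height):
        self.name = name
        self.walls = walls            # aggregation of structured parts
        self.height = height

    def footprint_vertices(self):
        # behaviour travels with the data: derive the 2d footprint
        return [(x, y) for wall in self.walls for (x, y, _z) in wall.polygon]

station = Building(
    "main station",
    walls=[Wall([(0, 0, 0), (30, 0, 0), (30, 0, 12), (0, 0, 12)], "brick")],
    height=12.0,
)
# db.save(station)   # hypothetical call: the whole object graph persisted at once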
geinformatics fce ctu 2006 60 issues of gis data management conclusion and future work now we can summarize the problem of the gis and cad integration. because of the different characteristics of the gis/cad worlds, firstly there’s need to decide for some suitable 3d data model, which could maintain complex and structured data types. this model also must be able to maintain the large-scale 3d models produced by cad as well as low-scale objects used by gis. object-oriented approach seems to be the proper way, by which this model could be developed, because it offers richer data structures and more intuitive representation of the real world objects. an example of basic data model could be geotoolkit. secondly, there’s need to prepare a system for maintaining the 3d data model. this system must be able to exchange data with different data sources including cad files. we suppose that this system has to be closely connected with an object-oriented database, which will maintain persistent objects. as a third part of such a system we have to develop application-specific data models for describing the real world objects of particular interest. in example, when we are concerned in modelling the cities, our data model has to describe buildings, streets and other city components. some of them are usually designed using cad systems an so it would be a great deal to be able use cad data for city modelling. here we introduce an example of data models describing the city problem. it describes the relations between the city parts. fig 1.: an example of the city data model the advantage of using object-oriented approach is that we can interactively modify our model, when it is necessary. because this model is stored by an object-oriented database, the data structure might be built exactly like you can see them in the model. there’s no need geinformatics fce ctu 2006 61 issues of gis data management to decompose this model into additional data structures because now we can store the whole object. in addition we can add a behavior to this object defining its methods. this way we can make digitalized abstractions of real world entities. as the last problem that is to be solved when we will be storing cad data in the 3d gis database is how to generalize and classify the high-scaled objects from cad when storing it into low-scaled objects in gis. an on the other hand how to restore the details of re-scaled objects stored in gis and be able to give them back into the cad to be redesigned. references 1. zlatanova s.: large-scale data integration an introduction to the challenges for cad and gis integration, directions magazine, july 10, 2004. 2. van oosterom p.: bridging the worlds of cad and gis, directions magazine, june 17, 2004. 3. zlatanova s., rahman a. a., pilouk m.: trends in 3d gis development, journal of geospatial engineering, vol. 4, no. 2, december 2002. 4. bodum l., sorensen e. m.: centre for 3d geoinformation towards a new concept for handling geoinformation, fig working week 2003, paris, april 2003. 5. weinstein d.: cross platform cad-gis integration: automating cad workflows and gis technologies to support structural inspection and decision support systems on bostons central artery project, gis-t 2004, march 29, 2004. 6. chance a., newell r.g., theriault d.g.: smalworld gis: an object-oriend gis issues and solutions, 2000, http://www.logis.ro/downloads/ 7. egenhofer m. j.: object-oriented modeling for gis, journal of the urban and regional information systems association 4 (2): 3-19, 1992. 8. 
kofler m.: r-trees for visualizing and organizing large 3d gis databases, tu graz, austria, 1998. 9. sarkozy f.: gis functions, periodica polytechnica ser. civ. eng. vol. 43, no. 1, pp.87-106, 1999. 10. garvey m., jackson m., roberts m.: an object-oriented gis, net.objectdays 2000. 11. li j., jing n., sun m.: spatial database techniques oriented to visualization in 3d gis, digital earth, june 2001. 12. van oosterom p., stoter j., quak w., zlatanova s.: the balance between geometry and topology, advances in spatial data handling, 10th international symposium on spatial data handling, springer-verlag, berlin, pp. 209-224 , april 2002. 13. zlatanova s.: 3d gis for urban development, tu delft, 2000. 14. zhou q., zhang w.: a preliminary review on 3-dimensional city model, asia gis 2003 conference, october 2003. geinformatics fce ctu 2006 62 http://www.logis.ro/downloads/ issues of gis data management 15. coors v.: 3d gis in networking environments, computer, environments and urban systems 27/4, special issue 3d cadastre, elsevier, 2003, issn 0198-9715, pp 345-357, april 2003. 16. shi w., yang b., li q.: an object-oriented data model for complex objects in threedimensional geographical information systems, int. j. of geographical information science, vol.17, no. 5, july-august 2003, 411-430. 17. balovnev o., breunig m., cremers a. b., shumilov s.: geotoolkit: opening the access to object-oriented geo-data stores, interoperating geographic information systems, boston: kluwer academic publishers, 1999. 18. breuning m., cremers a. b., seidemann r., shumilov s., siehl a.: integration of gocad with an object-oriented geo-database system, gocad meeting, nancy, france, june 1999. 19. bodum l.: design of a 3d virtual geographic interface for access to geoinformation in real time, corp 2004 and geomultimedia04, february 2004. geinformatics fce ctu 2006 63 ________________________________________________________________________________ geoinformatics ctu fce 2011 346 digital terrain model of the second military survey – part of the military training area brdy martina vichrova, vaclav cada university of west bohemia in pilsen, faculty of applied sciences, univerzitni 22, pilsen, czech republic vichrova@kma.zcu.cz, cada@kma.zcu.cz keywords: second military survey, hachure, spot height, hypsometry, digital terrain model. abstract: the second military survey in the territories of the former austro-hungarian monarchy was performed between 1806 and 1869. the territory of bohemia was surveyed from1842 to 1852 and moravia and silesia from 1836 to 1840. after detailed study of the lehmann´s theory of displaying the topographic landforms using hachure’s, it was detected that the hachure’s in the maps of the second military survey were created by means of the modified lehmann´s scale. the representation of landforms in maps of the second military survey was accomplished by spot heights represented mostly by points of geodetic control. the aim of this contribution is to propose and describe the methodology of creating the digital terrain model (dtm) from the second military survey hypsometry and to analyse its accuracy. a part of the map sheet (w_ii_11) of the second military survey, representing the long -standing military training area brdy, was chosen as a model area. the resulting dtm was compared with the recent reference digital ground model – dmr zabaged®. the conformity of terrain relief forms and elevation accuracy of the dtm derived from the second military survey hypsometry were also investigated. 1. 
hypsometry on the second military survey maps 1.1 the second military survey the second military survey (ordered by emperor francis ii) in the territories of the former austrian-hungarian monarchy progressed between 1806 and 1869. in the territories where the cadastral survey was completed (including bohemia, moravia and silesia), the outcomes were exploited for the military survey. reduced and generalised planimetric content from the cadastral maps and cadastral triangulation was used to outline the planimetric content of the second military survey. this assured an improved positional accuracy and better work economy. the fundamental characteristics of original and modified surveying technologies are introduced in table 1. the territory of bohemia was surveyed between 1842 and 1852 (267 handwritten colour sections 1:28 800) and moravia and silesia between 1836 and 1840 (146 handwritten colour sections 1:28 800), [1]. currently, the map originals are stored in the vienna military archive department of the austrian state archives. the second military survey provides a complex image of the czech republic before the peak of industrial and agricultural revolution. the map sheets of bohemia, moravia and silesia were completed within just 16 years only. such a quick process of mapping was possible thanks to changes in technology, especially by taking over the results of cadastral triangulation and ongoing or completed cadastral survey. the mapping work was effective, accelerated and reduced. the map sheets of bohemia, moravia and silesia were prepared at the time of the onset and expansion of industrial, transport and agricultural revolutions, in the time of establishing a civil society, the rise of capitalism and the wave of urbanization. all these influences and many others influenced the image of the country and the content of the maps of the second military survey. these maps still retain a great historical memory and are a valuable source of information for professional historians, archaeologists, geographers, landscape ecologists, planners, and more. 1.2 lehmann´s theory and hypsometry on the second military survey maps the saxon topographer johann georg lehmann had already been concerned with the terrain representation in maps and plans since the end of 18th century. he defined, described and unified his theory, which was subsequently published in the article: darstellung einer neuen teorie der bezeichnung der schiefen flächen im grundriß oder der situationszeichnung der berge, [2]. mailto:vichrova@kma.zcu.cz mailto:cada@kma.zcu.cz ________________________________________________________________________________ geoinformatics ctu fce 2011 347 table 1: fundamental characteristics of original and modified technologies of surveying in bohemia, moravia and silesia [1]. for 2d representation of terrain shapes he chose black hatches on a white background. the amount of black (the width of the hatches) on the surface was directly proportional to the value of the slope; the direction of hatches was analogous to the water flow on the surface of an elevated terrain shape. for the convenience of the topographer´s work and map reading, out of the slope from the map lehmann prepared a table with the ratios of black and white and the corresponding value of the slope between 0° – 45°, in increments of 5 °. the table allowed for the estimation of the slope angle in an assigned locality by a naked eye (see figure 1). elevated terrain forms were surveyed from the lowest place towards the highest. 
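the proportionality rule just described can be written down in a few lines; the fragment below is only an illustration of the 0°–45° table, assuming the classical rule black : white = slope : (45° − slope), and it does not come from lehmann's text or from [2].

# lehmann's hachure scale: the share of black grows proportionally with the slope
def black_fraction(slope_deg: float) -> float:
    if not 0.0 <= slope_deg <= 45.0:
        raise ValueError("the classical scale covers slopes between 0 and 45 degrees only")
    return slope_deg / 45.0

for slope in range(0, 50, 5):                       # the 5-degree steps of the table
    black = black_fraction(slope)
    print(f"{slope:2d} deg   black:white = {slope}:{45 - slope}   ({black:.0%} black)")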
at first, the horizontal lines were staked out in the terrain and then drawn into the map from the foot towards the top. then values of the slope were recorded. afterwards, the hachures were drawn and auxiliary drawings (horizontal lines, values of the slope) were removed in the office. figure 1: ratios of black and white for the slope between 0°and 45° [2]. the hachures in the maps of the second military survey were created by means of the modified lehmann´s scale. they portray not only the direction of the maximum gradient but also the slope of the terrain. the slope is portrayed by the functionally dependent length and thickness of the hachures, and the distances between them are according to the precise scale. the representation of the landforms in the maps of the second military survey was accomplished by spot heights chosen mostly from points of geodetic control. ________________________________________________________________________________ geoinformatics ctu fce 2011 348 the heights in the map sheets that represent territories of bohemia, moravia and silesia were produced according to modified technology in units of viennese fathoms, correct to two decimal places (see figure 2). in the map sheets that represent a part of south bohemia (vitorazsko), which were made according to original technology, no spot height is available. figure 2: an example of the depiction of the spot height on the map [3]. 2. digital terrain model of the second military survey a part of the map sheet (w_ii_11) from the second military survey was selected as a model territory for creating the digital terrain model (dtm). this territory is situated in the surroundings of the ponds called “padrtske rybniky” and further to the northeast of them. the chosen model territory has been part of the military area brdy for a long time and is very sparsely populated for that reason. the territory was chosen deliberately because there are no significant changes in the terrain caused by human activities. the terrain is very diverse with some interesting morphological shapes. the area of the model territory is shown in figure 3. figure 3: model territory (a part of map sheet w_ii_11) for creating the dtm [4]. the methodology for creating the dtm from hypsometry of the second military survey comprises the four following steps: 1. identification of the horizontals and skeleton elements of terrain relief in existing hypsometry, 2. fragmentation of areas with a constant slope of terrain relief, 3. determination of elevations of horizontal lines using the least squares method, 4. creation of the digital terrain model. 2.1 identification of horizontals and skeleton elements of terrain relief in existing hypsometry the skeleton elements of the terrain for the whole of the model territory (the ridge lines, fall lines, valley lines and the valley lines as a water flow) were identified and digitalized in the map using sw kokes. next, the boundary lines of hachure layers, recognized as the horizontal lines (see figure 4) according to [5] were digitized. ________________________________________________________________________________ geoinformatics ctu fce 2011 349 figure 4: horizontals (green), skeleton elements (red), trigonometric stations (blue). 2.2 fragmentation of areas with constant slope of terrain relief the fragmentation of areas with a constant slope of terrain relief comprises the six following steps: 1. transfer of the map drawing from colour to grayscale expression, 2. 
adjustment and colour correction of the map drawing (with removal of some planimetric objects and descriptions from the map), 3. blurring of the map drawing, 4. setting of limits for grayscale intervals, see [6], 5. fragmentation of areas of constant slope, 6. choice of colour scale. at first, the map drawing was transferred from colour to grayscale. next the planimetric objects and descriptions were removed from the map and these objects were replaced by hachures similar to those in the vicinity of the object (see figure 5). descriptions represented in black colour would substantially skew the areas of constant slope. it was also necessary to take the colour of the background into account. using colour histograms the background colour was set as white and the colour of hachure as black. it is obvious that leaving the planimetric objects and descriptions in the map drawing and omitting the colour correction would reduce the accuracy of the areas of constant slope and resulting digital terrain model as well. figure 5: original map drawing (on the left), map drawing after removing the planimetric objects and descriptions (in the middle), areas of constant slope (on the right). 2.3 determination of elevations of the horizontal lines using the least squares method points on the borderlines of hachures (where value of the slope has changed) were identified on selected line elements of the terrain skeleton. between such points height differences on the ridge lines, fall lines and, where necessary, on the selected valley lines were calculated using a rectangular triangle. the valley lines passing the ravine were suitable for calculating the height difference, because they respect the direction of hachure as parts of fall lines. the valley lines passing through the narrow valley, often with water flow and rocky watersides, were unsuitable for calculating the height differences. in view of the potential for discoloration or other colour changes of map drawings and background papers of old maps, it was very important to take into account the relative relations between hachure layers when ________________________________________________________________________________ geoinformatics ctu fce 2011 350 determining the slope values. hereafter it was very important to take into account presence of rocks and detritus fields, because these objects disturbed the areas of constant slope. considering the local changes of hachures caused by removing some planimetric objects and descriptions from maps, it was very important to consistently follow the falling and climbing of the terrain, especially on the top parts of mountain ridges. in the next step the partial height differences were calculated between the points on the lines of the terrain skeleton on the boundary of hachure layers. then the height differences between nodal points of the network (connected lines of the terrain skeleton) were calculated by summarising partial height differences. the complete network was adjusted by a minimum square method so that determination of the nodal point heights was not independent of the calculation mode. 133 equations were generated for adjustment of the network. the network contained 10 trigonometric stations with fixed value of their heights. 
81 unknowns (heights of the nodal points) and especially the corrections (vi) for each height difference were solved for the calculation of the mean error of adjusted height difference according to (equation 1): m r vv m t v 84.40 , (equation 1) where ldhav is the vector of height difference corrections, vt is the transposed vector, a is the matrix of determined height difference directions, laaadh tt 1 is the vector of corrections of the approximate heights of nodal points and l is the vector of height differences of nodal points. the partial height differences summarized between the layers of hachures (on the border lines of the constant slope areas) were adjusted proportionally to the values of partial height differences. then (proportionally to the value of slope) the height was matched to each horizontal line or a part of horizontal line that intersects the line of terrain skeleton in a concrete area. in this way, horizontal lines, parts of horizontal lines with determined heights and the heights of 10 trigonometric stations became a sufficient base for creating the digital terrain model of the second military survey. 2.4 creating the digital terrain model (dtm) horizontal lines and parts of horizontal lines with calculated heights were used for creation of the digital terrain model derived from the second military survey maps. heights displayed at 10 trigonometric stations were applied as well. the digital terrain model derived from the second military survey maps was computed by means of sw microstation v8i – inroads suite. at first the triangulated irregular network (tin) was generated and edited and then the digital terrain model (dtm) was created (see figure 6). figure 6: digital terrain model of the second military survey [4]. ________________________________________________________________________________ geoinformatics ctu fce 2011 351 3. comparing the digital terrain model of the second military survey with the reference dmr zabaged the digital terrain model of the second military survey was compared with the recent reference digital ground model – dmr zabaged®. the land survey office in prague provided the reference data dmr zabaged for the model territory (3d spatial contour lines with basic interval i = 5m, format *.dgn). according to [11], the accuracy of dmr zabaged hypsometry (1994 2000) is defined with the mean error of height 0.7 – 1.5m in the uncovered terrain, 1 – 2m in the built-up area and 2 – 5m in the continuous forested area; and the dmr zabaged improved hypsometry (since 2005) has the same parameters, but with more details. more information about accuracy of dmr zabaged is available in [7]. using dmr zabaged hypsometry data, the reference digital terrain model in sw microstation v8i – inroads suite was derived as a generalized and edited triangulated irregular network (tin). when comparing the reference dmr zabaged and digital terrain model of the second military survey, it was established that both models are identical in basic forms. it is possible to see some differences in details caused by the different scale of both products (zabaged 1:10 000, the second military survey 1:28 800) and the different representations of hypsometry on the maps. especially the rock forms at top of elevations and terrain steps on the hillsides are represented in the maps of the second military survey only schematically by hachures according to the modified lehmann´s scale and often by a map symbol. 
the dtm of the second military survey and the dmr zabaged are practically identical on the hillsides and in valleys as far as terrain forms are concerned. the differential digital terrain model (dtm of the second military survey minus dmr zabaged) was created using sw microstation v8i – inroads suite. the height differences were displayed by means of colour scale with 3m (see figure 7). outstanding local extremes are squared off with brown lines and are numbered: 1. local extremes caused by the positional error and especially by differences in terrain forms, 2. local extremes on the top part of the elevation caused by the incidence of rocks, 3. local extremes in the valley; it was very difficult to identify extremes from horizontal lines (the boundary lines of hachures) and to determine the slope on the lines of the terrain skeleton with sufficient accuracy because valley lines were unsuitable for such a task, 4. local extremes in places where the wrong value of the slope was determined. probably hachures for other slopes were drawn in this place, or the wrong value of the slope was read out in the course of processing. it is evident that the reasons of presence of local extremes could combine. figure 7: differential digital model – extremes [4] (black rectangle the accuracy of the dmr zabaged for continuously forested area according to [8]). ________________________________________________________________________________ geoinformatics ctu fce 2011 352 figure 8: areas of differential digital model for corresponding height interval in square metres [4]. using the differential digital model, total areas for each interval of height difference were computed. the generation of the graph followed (see figure 8). the maximum value of areas belong to intervals (-6; -3) and (-3; 0). the existence of a systematic error in this data set is evident. in addition, figure 8 illustrates that the set of values is confor mal with normal distribution. 4. conclusion the second military survey maps are still very valuable and unique sources of information. they were made by unified technology using the uniform cartographic means of expression and uniform scale 1:28 800 suitable for studies of the landscape. the second military survey maps provide a complex image of the countryside before the peak of the industrial and agricultural revolutions at a time when the country was not so much influenced by human activities. the described methodology of creating the digital terrain model of the second military survey makes it possible to create a digital terrain model of original relief of the landscape dating back to the time of the completion of the respective map sheets (the first half of the 19th century). the method of creating the dtm of the second military survey allowed us to obtain a digital model of the “previous landscape relief” anywhere in the czech republic that can be reliably used by many experts including geographers, landscape ecologists, historians and archaeologists. this model can be exploited in projects of revitalization of areas significantly affected by human activities such as those affected by surface mining, currently flooded areas or built-up areas, for landscape planning, implementation of landscape treatment and for landscape protection. this new method of creating the dtm of the second military survey can be also used in territories where the terrain was represented only by hachures according to the modified lehmann´s scale and no heights of triangulation stations are at disposal. 
in such a case it is necessary to supplement the heights from other available sources (e.g. database of cadastral triangulation, spot heights in maps of the third military survey or dataz – database of trigonometric a densification points http://dataz.cuzk.cz/). in the case of an insufficient number of trigonometric points in the area of interest it would be necessary to determine identical points in the terrain which could be considered as positionand heightinvariant since the time of creating the second military survey maps. 5. acknowledgements this contribution was supported by the european regional development fund (erdf), project “ntis new technologies for information society”, european centre of excellence, cz.1.05/1.1.00/02.00ř0. 6. references [1] vichrova, m., cada, v.: altimetry on the second military survey maps in the territories of bohemia, moravia and silesia. the 5th international workshop on digital approaches in cartographic heritage – proceedings, vienna: resarch group cartography, institute for geoinformation and cartography vienna, tu vienna. cd-rom, 604 618. http://dataz.cuzk.cz/ ________________________________________________________________________________ geoinformatics ctu fce 2011 353 [2] lehmann, j. g.: anweisung zum richtigen erkennen und genauen abbilden der erd-oberfläche in topographischen karten und situations-planen. dresden, publisher arnold, 1812. 54. [3] map sheet of the second military survey – w_ii_11. österreichisches staatsarchiv – kriegsarchiv wien, kartensammlung. © 2nd military survey, austrian state archive/military archive, vienna. © laborator geoinformatiky, univerzita j.e. purkyne – http://www.geolab.cz. © ministerstvo zivotniho prosteedi cr – http://www.env.cz. [4] vichrova, m.: rekonstrukce digitalniho modelu terenu druheho vojenskeho mapovani (frantiskova). dissertation thesis: university of west bohemia in pilsen, faculty of applied sciences department of mathematics. pilsen. [5] lehmann, j. g.: darstellung einer neuen teorie der bezeichnung der schiefen flächen im grundriß oder der situationszeichnung der berge. leipzig, publisher fleischer, 1799. [6] muster-blätter für die darstellung des terrains in militärischen aufnahms-plänen. zum gebrauche der arméeschulen, auf befehl und unter der leitung des k. k. österreichischen generalquartiermeisterstabs entworfen und mit dessen hoher bewilligung herausgegeben (1831 – 1840). österreichisches staatsarchiv, kriegsarchiv wien. kartenund plansammlung, sign. kviia42 e. [7] brazdil, k.: projekt tvorby noveho vyskopisu uzemí ceske republiky. geodeticky a kartograficky obzor 55 (97), 7 (2009), 145 – 151. [8] sima, j.: pruzkum absolutni polohove presnosti ortofotografickeho zobrazeni celeho uzemi ceske republiky s rozlisenim 0,50, 0,25 resp. 0,20 m v uzemi na zapadoceske univerzite v plzni. geodeticky a kartograficky obzor 55 (97). 9.(2009), 214 – 220. dynamic object-oriented geospatial modeling tomáš richta, martin hrubý department of intelligent systems faculty of information technology brno university of technology xricht16@stud.fit.vutbr.cz, hrubym@fit.vut.cz keywords: dynamic, object-oriented, modeling, geospatial, methodology, devs, gis, database abstract published literature about moving objects (mo) simplifies the problem to the representation and storage of moving points, moving lines, or moving regions. the main insufficiency of this approach is lack of mo inner structure and dynamics modeling – the autonomy of moving agent. 
this paper describes basics of the object-oriented geospatial methodology for modeling complex systems consisting of agents, which move within spatial environment. the main idea is that during the agent movement, different kinds of connections with other moving or stationary objects are established or disposed, based on some spatial constraint satisfaction or nonfulfilment respectively. the methodology is constructed with regard to following two main conditions – 1) the inner behavior of agents should be represented by any formalism, e.g. petri net, finite state machine, etc., and 2) the spatial characteristic of environment should be supplied by any information system, that is able to store defined set of spatial types, and support defined set of spatial operations. finally, the methodology is demonstrated on simple simulation model of tram transportation system. motivation this paper outlines the methodology for modeling objects moving within the spatial environment, while establishing and disposing connections with other moving or stationary objects. movement of objects is constrained by the environment, and the connection between two objects (moving, or stationary one) is established when some predefined spatial constraint is satisfied, and disposed when the constraint is no longer fulfilled. for example, moving agents could be vehicles, that move within the city. when each vehicle reaches some distance range to the previous one, it must slower its movement, or even stop to avoid the collision. there are also many cameras mounted on street lamps, that monitor the movement of vehicles. this could be modeled as a system where objects move within some predefined path, and geinformatics fce ctu 2009 29 richta t., hrubý m.: dynamic object-oriented geospatial modeling establishes or disposes connections with the other ones, which is triggered by the achievement of some specific distance to it. there also exist connections between stationary objects, and moving ones, triggered by the presence of the moving ones within the visibility range of those stationary. the idea of this paper is to define the methodology for such a system modeling. in this motivation section this idea is discussed in more detail. in the next related work section, the other important ideas, that are important for the methodology are briefly introduced. in the contribution section the main contribution of this paper is described, and the methodology outlined. in the last section the specific problem is solved using presented methodology. spatially embedded networks the important observation concerning the moving object positions, is that the movement of objects is usually not free, but constrained by some predefined paths (e.g. roads, streets, rails, pavements, passages, stairs, doors, etc.) that connects some places (crossings, stations, buildings, squares, etc.). both paths and places form some kind of spatially embedded network. this observation allows to abstract form the classical notion of spatial position, defined as a function p : ω → rn, where ω is universe of objects, and n is the dimension of the space. position specification becomes even more complicated, concerning the geospatial position, which includes the problem of projecting the earth surface to some regular grid. the idea of abstracting the environment to the form of spatially embedded network was first introduced by wolfson et al. [1], discussed by jensen [2], and formalized by guting [3]. 
the spatial network is modeled as a graph g = (v, e), with a set of vertices v and a set of edges e ⊆ v × v. the set of possible positions of a moving object within such a graph g is then defined as [3] pos(g) = v ∪ (e × (0, 1)). it means that the agent is positioned either on some vertex v ∈ v or at some relative position on an edge, defined as a tuple (e, x), where e ∈ e, x ∈ (0, 1). the position of objects is then studied from the topological point of view. but if there is a mapping from v to a set of point features, and from e to a set of linear features, it is possible to compute the precise geospatial position of objects using some geographic information system (gis). dynamic geospatial networking the main idea of this paper is to define the principles and rules of dynamic connection establishment and disposal among moving, and also stationary, objects, based on the satisfaction or nonfulfilment of some spatial constraint – dynamic geospatial networking. the following example of networking assumes a simple spatial constraint: the visibility of objects. based on that, connections are established when objects see each other, and disposed when they do not. the schema of the example is shown in fig. 1, where three moving objects (e.g. vehicles) follow a predefined path, being observed by two watchtowers a and b. when a vehicle reaches the visibility range of a watchtower, a connection between the vehicle and the stationary object is established. there are also connections between moving vehicles that see each other, and also between the two watchtowers for the same reason. figure 1: geospatial networking example the scene could be interpreted as follows. the vehicle x is in the range of watchtower b and right now enters the range of watchtower a, because it has got past the obstacle k. it also sees the vehicle y, but does not see the vehicle z, because of the obstacle j. the vehicle y sees both other vehicles, and is monitored by the watchtower a, and not by b, because of j. finally, the vehicle z is monitored by the watchtower b, and sees the vehicle y. related work geographic information systems a geographic information system (gis) is a system capable of storing and retrieving spatially characterized data. the study of gis covers many disciplines, like spatial data capturing, geospatial analysis, cartographic visualization, etc. for the purposes of the discussed methodology, the gis should provide the spatial data back end for geospatial analysis computations – visibility, signal propagation, etc. from this point of view the following functionality of a gis is important: • storage of basic spatial data types (points, lines, polygons) • basic spatial operations (length, distance, ...) • advanced analytical operations (visibility, signal propagation, ...) • a well defined interface or language for the formulation of requirements (queries). regarding the coupling of different types of models, some ideas concerning the interaction between a gis and other models exist. for example, brandmeyer and karimi isolated five levels of environmental model coupling [4]. those levels could be thought of as levels of maturity of the integration of a gis with other systems. the described methodology uses the gis as an outer resource of information and defines a strict interface for communication between the simulation model and the gis. this approach is similar to the loose coupling level defined in [4].
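to make this position model concrete, the following minimal c sketch (purely illustrative – the types and the straight-segment interpolation are assumptions, not code from the paper or from any gis) resolves a network position from pos(g) = v ∪ (e × (0, 1)) to a geospatial coordinate, standing in for the gis mapping from vertices to point features:

#include <stdio.h>

/* a point feature supplied by the gis for a vertex (hypothetical struct) */
typedef struct { double x, y; } point2d;

/* a network position: either a vertex, or a relative offset on an edge */
typedef struct {
    int on_edge;      /* 0 = on a vertex, 1 = on an edge */
    int vertex;       /* valid if on_edge == 0 */
    int edge_from;    /* valid if on_edge == 1 */
    int edge_to;
    double offset;    /* relative position x in (0, 1) */
} netpos;

/* resolve a network position to a geospatial coordinate;
 * vertex_feature[] plays the role of the gis mapping from v to point features,
 * and a straight segment stands in for the edge's line feature */
static point2d resolve(const netpos *p, const point2d vertex_feature[]) {
    if (!p->on_edge)
        return vertex_feature[p->vertex];
    point2d a = vertex_feature[p->edge_from];
    point2d b = vertex_feature[p->edge_to];
    point2d r = { a.x + p->offset * (b.x - a.x),
                  a.y + p->offset * (b.y - a.y) };
    return r;
}

int main(void) {
    point2d features[2] = { { 14.40, 50.09 }, { 14.42, 50.10 } }; /* two vertices */
    netpos p = { 1, 0, 0, 1, 0.25 };                              /* (e, 0.25)   */
    point2d g = resolve(&p, features);
    printf("geospatial position: %.4f %.4f\n", g.x, g.y);
    return 0;
}

in a real setting the edge would map to a polyline in the gis and the interpolation would run along that feature; the straight segment here only illustrates the idea of separating topological and geospatial positions.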
it means that all necessary spatial queries are immediately sent through the interface to the gis, and the resulting data are returned back. there is no integration between those two models, so any gis satisfying the defined criteria could be used. moving objects databases moving objects represent a research area concerned with the computational representation of spatially or geospatially embedded objects that change their position over time. moving objects cover a broad research area, including popular terms like global positioning systems (gps), location based services (lbs), and many others. for the purposes of this paper the problem is reduced to moving objects databases, more precisely to databases of moving objects in networks. this research area covers spatio-temporal modeling and algebras [5], [6], [3], implementation issues of the physical data models [2], efficient querying [7], [8], indexing techniques [9], [10], [11], and database benchmarking [12]. guting et al. described the algebra of spatio-temporal predicates [5], [6], including the formal specification of the type system and the definition of spatial and spatio-temporal predicates with temporal aggregation, up to querying the developments in stsql – a spatio-temporal extension of the structured query language (sql). this work laid the foundations of spatio-temporal databases and was further extended by guting, almeida, and ding to cover the area of moving objects in spatial networks [3]. discrete event system specification research on moving objects databases reduces the moving object to either a moving point, a moving line, or a moving region. the main idea of this paper is to enhance the model with the moving objects' autonomous behavior. the important formalism that encapsulates the inner behavior of agents, provides the means for its temporality, and defines a clear interface for its incoming and outgoing communication is called the discrete event system specification (devs), and was introduced by zeigler in [13], [14]. a devs basic component, also called atomic, is defined as a structure [13] devs = 〈x, y, s, δ, λ, ta〉 where x is the set of external input events, y is the set of output events, s is the set of sequential (partial) states, λ : s → y is the output function, ta : s → r+0,∞ is the time advance function, and δ is a transition function consisting of two parts: 1) the internal transition function δint : s → s, and 2) the external transition function δext : q × x → s, where q is the total state of the devs given by q = {(s, e) | s ∈ s ∧ 0 ≤ e ≤ ta(s)}. concepcion and zeigler further extended the model to provide for hierarchical modeling by the possibility of hierarchically combining atomic devs components [15], [14]. a coupled devs is a structure that consists of a number of other coupled or atomic devs components, and formally it is defined as cdevs = 〈x, y, d, {m_d}, cxx, cyx, cyy, select〉 where x is the set of input events, y is the set of output events, d is the set of component references, where ∀d ∈ d : m_d ∈ {devs, cdevs}, cxx ⊆ x × ⋃_{i∈d} x_i is the set of external input couplings, cyx ⊆ ⋃_{i∈d} y_i × ⋃_{i∈d} x_i is the set of internal couplings, cyy : ⋃_{i∈d} y_i → y ∪ {φ} is the external output coupling function, and select : 2^d → d is the tie-breaking function that solves the problem of simultaneous events.
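for readers less familiar with the formalism, here is a minimal, hypothetical c sketch of an atomic devs component – a trivial two-state model whose transition, output and time advance functions are ordinary functions over an explicit state type (this is only an illustration of the structure 〈x, y, s, δ, λ, ta〉, not code from the paper or from any devs library):

#include <stdio.h>
#include <math.h>   /* INFINITY */

/* partial (sequential) states of a trivial atomic devs */
typedef enum { IDLE, BUSY } state;

/* external input and output events */
typedef enum { JOB } input_event;
typedef enum { DONE } output_event;

/* ta : s -> r+, how long the model stays in a state before an internal event */
static double ta(state s) { return s == BUSY ? 5.0 : INFINITY; }

/* internal transition delta_int : s -> s, fired when ta(s) expires */
static state delta_int(state s) { (void)s; return IDLE; }

/* external transition delta_ext : q x x -> s; e is the elapsed time in the
 * current state, i.e. the second component of the total state q = (s, e) */
static state delta_ext(state s, double e, input_event x) {
    (void)s; (void)e; (void)x;
    return BUSY;                  /* an arriving job makes the model busy */
}

/* output function lambda : s -> y, produced just before an internal transition */
static output_event lambda(state s) { (void)s; return DONE; }

int main(void) {
    state s = IDLE;
    s = delta_ext(s, 0.0, JOB);              /* external event: a job arrives */
    printf("ta(BUSY) = %.1f\n", ta(s));      /* internal event scheduled      */
    printf("output event: %d\n", lambda(s)); /* lambda fires, then delta_int  */
    s = delta_int(s);
    printf("state after delta_int: %d\n", s);
    return 0;
}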
the structure of the input and output sets of devs and cdevs could also be refined to provide the notion of incoming and outgoing ports for events, as follows [14]: x = {(p, v) | p ∈ inports, v ∈ x_p}, where inports is the set of input ports and x_p ⊆ x, and y = {(p, v) | p ∈ outports, v ∈ y_p}, where outports is the set of output ports and y_p ⊆ y. another important extension of the devs formalism was made by barros [16], who extended the definition of devs to support changes in structure. barros defined the dsdn – dynamic structure devs network – as [16], [14] dsdn_∆ = 〈x_∆, y_∆, χ, m_χ〉 where ∆ is the dsdn name, x_∆ is the input event set of the dsdn, y_∆ is the output event set of the dsdn, χ is the dsdn network executive name, and m_χ is the model of χ, which is defined as m_χ = 〈x_χ, s_χ, y_χ, δint_χ, δext_χ, λ_χ, ta_χ〉 with a sequential state s_χ ∈ s_χ defined as s_χ = 〈d^χ, {m_i^χ}, {i_i^χ}, {z_i,j^χ}, select^χ, v^χ〉 where d^χ is the set of components of the network, where χ ∉ d^χ, m_i^χ = 〈x_i, s_i, y_i, δint_i, δext_i, λ_i, ta_i〉 is the model of component i, for all i ∈ d^χ, i_i^χ is the set of influencers of i, for all i ∈ d^χ ∪ {χ, ∆}, where i ∉ i_i^χ for all i ∈ d^χ ∪ {χ, ∆}, z_i,j^χ is the i-to-j translation function, for all j ∈ i_i, with z_∆,j^χ : x_∆ → x_j, z_i,∆^χ : y_i → y_∆, z_i,j^χ : y_i → x_j, and z_k,χ^χ(y) ≠ φ ⇒ z_k,j^χ(y) = φ for k ∈ d^χ ∪ {∆} and for all j ∈ i_k^χ − {χ}, with φ as the null event, select^χ : π → d^χ ∪ {χ} is the select function, where π = 2^(d^χ ∪ {χ}) − {∅} and select^χ(a) ∈ a, and v^χ are other state variables not defined before. s_χ is called the structure, and any change in the described variables is called a change in structure. filippi and bisgambiglia published a dsdn modification that should provide for modeling a spatially embedded network of modified atomic components called point-devs, which change their position within a spatial environment [17]. this approach is similar to cellular devs, and is used to simulate fire propagation in a natural environment. the atomic devs is reduced to have the partial state defined as a tuple (vd, e), where vd is a displacement vector and e is a local space property of the point-devs. this work is similar to the presented methodology, but is not usable as a solution, because it defines only the position to be the partial state of the devs. contribution moving objects devs modification for the purpose of the presented methodology, a simple extension of the atomic devs component is introduced, called ndevs – networking-devs – which is defined as the structure ndevs = 〈x, y, s, δ, λ, ta, γ〉 where s, δ, λ, ta remain the same as in the atomic devs, x = {(p, v) | p ∈ pin, v ∈ x_p} is the set of input ports and values, y = {(p, v) | p ∈ pout, v ∈ y_p} is the set of output ports and values, and γ : pin ∪ pout → l ∪ {ε} is the annotation function, where l is the language for networking rules definition and ε is the empty word. the extension of the coupled component is similar to the atomic one, so the following structure defines ncdevs – networking-coupled-devs: ncdevs = 〈x, y, d, {m_d}, cxx, cyx, cyy, select, γ〉 where d, m_d, cxx, cyx, cyy, and select are the same as in cdevs, ∀d ∈ d : m_d ∈ {ndevs, ncdevs}, x = {(p, v) | p ∈ pin, v ∈ x_p} is the set of input ports and values, y = {(p, v) | p ∈ pout, v ∈ y_p} is the set of output ports and values, and γ : pin ∪ pout → l ∪ {ε} is the annotation function, where l is the language for networking rules definition and ε is the empty word.
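the γ annotation attaches a networking rule – a word of the language l – to each port. as a purely illustrative sketch (the rule language is only outlined later in the paper, and all names below are hypothetical), a port can be carried together with its annotation, and the executive can test a simple name-and-distance constraint between an output port and an input port:

#include <stdio.h>
#include <math.h>
#include <string.h>

/* an annotated port: name, direction, current geospatial position of its
 * owner, and the networking rule assigned by the gamma function */
typedef struct {
    const char *name;
    int is_input;            /* 1 = input port, 0 = output port */
    double x, y;             /* owner position, obtained from the gis */
    const char *rule;        /* gamma(p): a word of the rule language l */
} port;

/* a toy stand-in for the constraint evaluation: two ports may be coupled
 * when the port names match and the spatial (distance) constraint holds */
static int may_couple(const port *out, const port *in, double delta) {
    if (!in->is_input || out->is_input) return 0;
    if (strcmp(out->name, in->name) != 0) return 0;           /* name constraint    */
    double dx = out->x - in->x, dy = out->y - in->y;
    return sqrt(dx * dx + dy * dy) < delta;                   /* spatial constraint */
}

int main(void) {
    port tx = { "signal", 0, 10.0, 5.0,
                "select p from allinports where p.name = 'signal' and p.distance(v) < delta" };
    port rx = { "signal", 1, 12.0, 5.0,
                "select p from alloutports where p.name = 'signal'" };
    printf("couple? %d\n", may_couple(&tx, &rx, 5.0));   /* 1: within range */
    printf("couple? %d\n", may_couple(&tx, &rx, 1.0));   /* 0: too far      */
    return 0;
}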
dynamic changes of the structure of network, could be modeled as dsdn, with following modifications mχ = 〈xχ,sχ,yχ,δintχ,δextχ,λχ, taχ,ψχ〉 where sχ ∈ sχ = 〈dχ,{m χ i },{i χ i },{z χ i,j},select χ,v χ〉, ∀i ∈ dχ : mχi ∈{ndev s,ncdev s}, and ψχ is evaluation function defined as ψχ : ⋃ i,j∈dχ,i 6=j si ×pi ×l×sj ×pj ×l → bool where ∀x ∈ dχ : px = pinx ∪poutx, geinformatics fce ctu 2009 35 richta t., hrubý m.: dynamic object-oriented geospatial modeling l is language for networking rules definition. during the model evolution, right after each internal transition function δintχ execution, the system evaluates the function ψχ on all components i,j ∈ dχ, where i 6= j. this modifies the results given by zi,j function of sχ to z ′ i,j, based on the evaluation of annotations of ports px ∈ px,x ∈ dχ given by the function γx,x ∈ dχ, within the context of present state sx ∈ sx,x ∈ dχ of each component. the modification is given by following rule ψχ(si,pi,γi(pi),sj,pj,γj(pj)) ⇒ (yi,xj) ∈{z ′ i,j} it is necessary to state, that the presented modifications of devs does not extend its modeling capabilities, which could be formally proved, but this proof is not part of this paper. dynamic geospatial modeling methodology the main idea of developed methodology is to define basic rules for transforming from the realworld situation to the dynamic geospatial networking system. because the proper definition of whole methodology is too complex, only the simple outline is introduced. the transformation should consist of following steps: 1. each object of the system is described as ndevs, or ncdevs d, using following rules (a) the set of states is defined as sd = sbas ×p ×r+0,∞, where sbas is the set of basic states of the object, and w = v0e1v1e2v2...ekvk,ei = (vi−1,vi), 1 ≤ i ≤ k is the path the object is following (b) each port of d is annotated with string that specifies the the set of rules rp of port pd, defining its ability to connect to the other ports within the system, the rule r ∈ rp = (cs,co), where cs is the set of spatial constraints (e.g. visibility, distance less than, etc.), and co is the set of other constraints (e.g. name of the port, etc.) (c) the object containing multiple ndevs, is coupled into ncdevs, it is necessary to specify the rules on ncdevs according to the rules on ports, that are connected to them 2. all objects are then supplied to the network executive χ, that solves the problem of dynamic networking by the following sequence (a) if the state of any of the ndevs d ∈ d changes (b) ∀pd ∈ pind ∪poutd,∀px ∈ ⋃ x∈dχ pinx ∪poutx the ψχ function is evaluated (c) based on the result of ψχ the zi,j function of χ is modified let’s consider the tram transportation system as a case study of described methodology. model of such a system consists of rail network and moving trams. rail network consists of places (platforms and switches), and transitions (rail segments). each platform is connected with two rail segments – the one by that it is achievable, and the other by which it could be leaved. each platform is considered to be a part of some station object. rail segments connect two places, and each switch has one incoming rail segment and two outgoing rail segments. spatial environment is then constructed as a network of transitions and places. geinformatics fce ctu 2009 36 richta t., hrubý m.: dynamic object-oriented geospatial modeling trams are moving agents, that change their position within this network in time, and make decisions at the switches based on the assigned travel schedule. 
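to fix ideas before the formal models, the following small c sketch (hypothetical names, not from the paper) represents the rail network as places (platforms, switches) and transitions (rail segments), with a tram's travel schedule given as a path through this graph:

#include <stdio.h>

typedef enum { PLATFORM, SWITCH } place_kind;

/* a place (vertex): platform or switch, mapped to a point feature in the gis */
typedef struct { const char *name; place_kind kind; } place;

/* a rail segment (edge/transition) connecting two places,
 * mapped to a line feature in the gis */
typedef struct { int from, to; } segment;

int main(void) {
    /* a tiny fragment of a network: two platforms and one switch */
    place   places[]   = { { "p1", PLATFORM }, { "p2", PLATFORM }, { "s1", SWITCH } };
    segment segments[] = { { 2, 0 }, { 2, 1 } };   /* s1 -> p1 and s1 -> p2 */

    /* a tram's schedule as a path w = v0 e1 v1 ...: arrive at the switch,
     * then take the segment leading to platform p2 */
    int path_places[]   = { 2, 1 };
    int path_segments[] = { 1 };

    printf("tram route: %s", places[path_places[0]].name);
    for (int i = 0; i < 1; i++) {
        segment e = segments[path_segments[i]];
        printf(" --(%s->%s)--> %s", places[e.from].name, places[e.to].name,
               places[path_places[i + 1]].name);
    }
    printf("\n");
    return 0;
}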
switches and platforms are stationary objects, that mediate the environment connections. each place is mapped to the point feature, and each transition is mapped to line feature within some gis. let’s consider following real-world situation – in prague, there is station malostranská, the scheme of this station is shown in fig. 2. figure 2: malostranská station this schema could be interpreted as following – tram t1 stopped on platform p2, and the the switch s3 indicates, that t1 is heading to čech̊uv most. tram t2 is also heading to čech̊uv most, and at the moment it blocks t1’s movement. tram t3 is coming from hradčanská, and the switch s1 indicates, that it can go further to the p1 on malostranská. more formally – tram is a vehicle, that moves along the rails within predefined path w = v0e1v1e2v2...ekvk,ei = (vi−1,vi), 1 ≤ i ≤ k, containing sequence of vertexes vi ∈ v and edges ei ∈ e. actual heading of the tram is given by the next element of p, and actual position as p ∈ pos(g), defined previously. tram is modeled as ndevs tram = 〈x,y,s,δ,λ,ta,γ〉 with following values x = {(?go,true), (?stop,true)} y = {(!next,p ∈ pos(g)), (!position,p ∈ pos(g))} s = {(d,σ)|d ∈ w ×{stopped,running},σ ∈ r+0,∞},s0 = ((v0,stopped),∞) ∀s ∈ s : ta(s) = σ δext((((vi,stopped),σ), te),go) = ((vi,running),σ) δext((((vi,stopped),σ), te),stop) = ((vi,stopped),σ −−te) δext((((vi,running),σ), te),go) = ((vi,running),σ −−te) geinformatics fce ctu 2009 37 richta t., hrubý m.: dynamic object-oriented geospatial modeling δext((((vi,running),σ), te),stop) = ((vi,stopped),σ) δext(((((ei,x),running),σ), te),go) = (((ei,x),running),σ −−te) δext(((((ei,x),running),σ), te),stop) = (((ei,x),stopped),σ) δext(((((ei,x),stopped),σ), te),go) = (((ei,x),running),σ) δext(((((ei,x),stopped),σ), te),stop) = (((ei,x),stopped),σ −−te) δint((vi,running),σ) = (((ei+1, 0),running),σ) δint(((ei, 0 < x < 1),running),σ) = (((ei,x + vte),running),σ) δint(((ei,x ≥ 1),running),σ) = ((vi+1,stopped),σ) δint((vi,stopped),σ) = ((vi,running),σ) λ((vi,stopped),σ) = (!position,vi) λ((vi,stopped),σ) = (!next, (ei+1, 0)) λ((vi,running),σ) = (!position,vi) λ((vi,running),σ) = (!next, (ei+1, 0)) λ(((ei,x),stopped,σ) = (!position, (ei,x)) λ(((ei,x),stopped,σ) = (!next, (vi+1)) λ(((ei,x),running,σ) = (!position, (ei,x)) λ(((ei,x),running,σ) = (!next, (vi+1)) γ(?stop) = ’select p from alloutports where p.name = ’stop’’ γ(?go) = ’select p from alloutports where p.name =’go’’ γ(!position) = ’select p from allinports where p.name = ’position’’ γ(!next) = ’select p from allinports where p.name = ’next’’ let’s assume that the tram has two sensors (transmitter/receiver), that are able to broadcast and receive information to and from other sensors in the system. the transmitter is on the back of tram, and the receiver in the front of the vehicle. tram could then indicate a proximity of the other by receiving the signal from broadcaster, resulting in tram stop to avoid the collision. 
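as a reading aid for the dense tram specification above, and before the sensor components that follow, here is one possible c rendering of the tram's state and its go/stop handling; the names and the simplified position update are assumptions for illustration, not the authors' implementation:

#include <stdio.h>

typedef enum { STOPPED, RUNNING } mode;

/* tram state: index into the path w = v0 e1 v1 ... ek vk, a flag saying
 * whether the tram is currently on a vertex or on an edge, the relative
 * edge offset, and the movement mode (stopped/running) */
typedef struct {
    int idx;        /* current vertex / edge index along the path */
    int on_edge;    /* 0 = on vertex v_idx, 1 = on the following edge */
    double x;       /* relative position on the edge, in [0, 1]   */
    mode m;
} tram_state;

/* external transition: the ?go / ?stop events only switch the mode */
static void delta_ext(tram_state *s, int go) { s->m = go ? RUNNING : STOPPED; }

/* internal transition: a running tram advances along the path;
 * v * te is the distance covered since the last event (cf. x + v*te above) */
static void delta_int(tram_state *s, double v, double te) {
    if (s->m != RUNNING) return;
    if (!s->on_edge) {                    /* leave the vertex, enter the next edge */
        s->on_edge = 1; s->x = 0.0;
    } else if ((s->x += v * te) >= 1.0) { /* end of the edge reached */
        s->on_edge = 0; s->x = 0.0; s->idx++; s->m = STOPPED;
    }
}

int main(void) {
    tram_state t = { 0, 0, 0.0, STOPPED };
    delta_ext(&t, 1);                        /* ?go                        */
    delta_int(&t, 0.5, 1.0);                 /* enter edge e1              */
    delta_int(&t, 0.5, 1.0);                 /* advance to (e1, 0.5)       */
    delta_ext(&t, 0);                        /* ?stop (e.g. from receiver) */
    printf("on_edge=%d idx=%d x=%.2f mode=%d\n", t.on_edge, t.idx, t.x, t.m);
    return 0;
}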
transmitter could be modeled as following ndevs component transmitter = 〈x,y,s,δ,λ,ta,γ〉 noindent with following values x = {} y = {(!signal,bool)} geinformatics fce ctu 2009 38 richta t., hrubý m.: dynamic object-oriented geospatial modeling s = {(d,σ)|d ∈{v}×{nosignal,signal},σ ∈ r+0,∞},s0 = (nosignal, 0) λ((v,nosignal),σ) = (v,signal) λ((v,signal),σ) = (v,nosignal) γ(!signal) = ’select p from allinports where p.name = ’signal’ and p.distance(v) < delta’ and receiver as following ndevs component receiver = 〈x,y,s,δ,λ,ta,γ〉 with following values x = {(?signal,true)} y = {(!stop,true), (!go,true)} s = {(d,σ)|d ∈{v}×{waiting,receiving},σ ∈ r+0,∞},s0 = (waiting,∞) δext((((v,waiting),σ), te),signal) = ((v,receiving),σ) δext((((v,receiving),σ), te),signal) = ((v,receiving),σ −−te) δext((((v,receiving),σ), te),φ) = ((v,waiting),∞) λ((v,receiving),σ) = (!stop,true) λ((v,waiting),σ) = (!go,true) γ(?signal) = ’select p from alloutports where p.name = ’signal’ and p.distance(v) < delta’ γ(!stop) = ’select p from allinports where p.name = ’stop’’ γ(!go) = ’select p from allinports where p.name = ’go’’ tram with sensors is then following ncdevs model tramwithsensors = 〈x,y,d,{mi},cxx,cyx,cyy,select,γ〉 with following values x = {(?signal,true), (?go,true)}, y = {(!next,p ∈ pos(g)), (!position,p ∈ pos(g))}, d = {tram,transmitter,receiver}, mtram = tram,mtransmitter = transmitter,mreceiver = receiver cxx = {(?signal,receiver.?signal), (?go,tram.?go), ()} cyx = {(receiver.!go,tram.!go), (receiver.!stop,tram.!stop)} cyy(transmitter.!signal) =!signal geinformatics fce ctu 2009 39 richta t., hrubý m.: dynamic object-oriented geospatial modeling cyy(tram.!position) =!position cyy(tram.!next) =!next γ(?signal) = ’select p from alloutports where p.name = ’signal’ and p.distance(v) < delta’ γ(?go) = ’select p from alloutports where p.name = ’go’ and p.distance(v) < delta’ γ(!position) = ’select p from allinports where p.name = ’position’ and p.distance(v) < delta’ γ(!next) = ’select p from allinports where p.name = ’next’ and p.distance(v) < delta’ it could be seen, that γ functions of ncdevs were defined according to connected ndevs components. schematic diagram of described ndevs and ncdevs models, and two more, are shown in fig.3., where the dynamically established and disposed connections are drawn using dashed arrows. future work spatio-temporal constraint language for the formalization of ports rules definitions, it is necessary to define the semantics of the spatio-temporal constraint language l, that is used for rules definition. so far the sql-like notation was used, which probably remains the same, but it is necessary to precisely define the predicates, that might be used within the rules definition. agent dimensions one of the other problems is the size of moving agents. in presented case study, this problem could be easily solved using the space manager, which asks the gis database for the information about the ordering of vehicles (based on their location and direction). but in future, the size of agents could also play another important roles, so it might be included to the model. model persistence the problem of the model persistence includes the storage of agents inner states, actual connections, and other values, including the history of network evolution. here it is possible to build on the moving objects databases fundamentals also briefly introduced in this paper. the area could also introduce the problem of query optimizations, that have not been addressed yet. 
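to make the persistence idea slightly more tangible, a hypothetical sketch of what one snapshot of the evolving network could carry – the agents' inner states and positions plus the currently established couplings – might look as follows (an assumption about a possible record layout, not a design from the paper):

#include <stdio.h>

/* one persisted row describing an agent at a given simulation time */
typedef struct {
    long   agent_id;
    double sim_time;     /* when the snapshot was taken                   */
    int    state;        /* encoded inner (sequential) state of the ndevs */
    int    on_edge;      /* network position, as in pos(g)                */
    int    idx;
    double offset;
} agent_row;

/* one persisted row describing an established coupling between two ports */
typedef struct {
    long   from_agent, to_agent;
    char   from_port[16], to_port[16];
    double established_at;   /* supports the history of network evolution */
} coupling_row;

int main(void) {
    agent_row a = { 42, 17.5, 1, 1, 3, 0.25 };
    coupling_row c = { 42, 7, "signal", "signal", 17.5 };
    printf("agent %ld at t=%.1f: state=%d pos=(%d,%d,%.2f)\n",
           a.agent_id, a.sim_time, a.state, a.on_edge, a.idx, a.offset);
    printf("coupling %ld.%s -> %ld.%s since t=%.1f\n",
           c.from_agent, c.from_port, c.to_agent, c.to_port, c.established_at);
    return 0;
}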
figure 3: ndevs and ncdevs models conclusions this paper introduced the outline of the dynamic geospatial networking methodology, which is based on the devs formalism extensions called networking-devs and networking-coupled-devs. the introduced extensions do not improve the modeling power of the classical devs formalism, but only define a different type of its usage. the methodology covers basic rules for the transformation from real-world moving objects into networking-devs models. it also defines the rules for the dynamic connecting of devs models, based on the satisfaction of some geospatial constraint, which is called geospatial networking. connections are established as devs port couplings, based on the rules defined in the port annotations. the dynamic behavior of the model is formalized by a modified dsdn structure, with a network executive. the methodology assumes that the network executive is connected with some gis, which it uses for geospatial analytical computations. based on the results of those analyses, the connections are established and disposed. the presented methodology was demonstrated on a test case concerning a tram traffic model. references 1. ouri wolfson, bo xu, sam chamberlain, and liqin jiang. moving objects databases: issues and solutions. in ssdbm, pages 111-122, 1998. 2. christian s. jensen, torben bach pedersen, laurynas speicys, and igor timko. data modeling for mobile services in the real world. in sstd, pages 1-9, 2003. 3. ralf hartmut guting, victor teixeira de almeida, and zhiming ding. modeling and querying moving objects in networks. the vldb journal, 15(2):165-190, 2006. 4. jo ellen brandmeyer and hassan a. karimi. coupling methodologies for environmental models. environmental modelling and software, 15(5):479-488, 2000. 5. ralf hartmut guting, michael h. boehlen, martin erwig, christian s. jensen, nikos a. lorentzos, markus schneider, and michalis vazirgiannis. a foundation for representing and querying moving objects. acm trans. database syst., 25(1):1-42, 2000. 6. ralf hartmut guting and markus schneider. moving objects databases. morgan kaufmann, 2005. 7. christian s. jensen, jan kolář, torben bach pedersen, and igor timko. nearest neighbor queries in road networks. in gis, pages 1-8, 2003. 8. dimitris papadias, jun zhang, nikos mamoulis, and yufei tao. query processing in spatial network databases. in vldb, pages 802-813, 2003. 9. su chen, christian s. jensen, and dan lin. a benchmark for evaluating moving object indexes. pvldb, 1(2):1574-1585, 2008. 10. xiang li and hui lin. indexing network-constrained trajectories for connectivity-based queries. international journal of geographical information science, 20(3):303-328, 2006. 11. talel abdessalem, cédric du mouza, josé moreira, and philippe rigaux. management of large moving objects datasets: indexing, benchmarking and uncertainty in movement representation. in yannis manolopoulos, apostolos papadopoulos, and michael vassilakopoulos, editors, spatial databases, pages 225-249. idea group, 2005. 12. thomas brinkhoff. a framework for generating network-based moving objects. geoinformatica, 6(2):153-180, 2002. 13. bernard p. zeigler. multifacetted modelling and discrete event simulation. academic press professional, inc., san diego, ca, usa, 1984.
geinformatics fce ctu 2009 42 richta t., hrubý m.: dynamic object-oriented geospatial modeling 14. bernard p. zeigler, herbert praehofer, and tag g. kim. theory of modeling and simulation, second edition. academic press, 2 edition, january 2000. 15. arturo i. concepcion and bernard p. zeigler. devs formalism: a framework for hierarchical model development. ieee trans. software eng., 14(2):228-241, 1988. 16. fernando j. barros. dynamic structure discrete event system specification: a new formalism for dynamic structure modeling and simulation. in winter simulation conference, pages 781-785, 1995. 17. jean-baptiste filippi and paul bisgambiglia. general methodology 2: enabling large scale and high definition simulation of natural systems with vector models and jdevs. in winter simulation conference, pages 1964-1970, 2002. acknowledgement: this work has been supported by the czech ministry of education under the research plan no. msm0021630528 ”security-oriented research in information technology”. geinformatics fce ctu 2009 43 geinformatics fce ctu 2009 44 postgis pro vývojáře pavel stěhule department of mapping and cartography, faculty of civil engineering czech technical university in prague stehule kix.fsv.cvut.cz kĺıčová slova: database systems, gis, development abstrakt systémy gis prvńı generace ukládaly svá data do soubor̊u v proprietárńıch formátech. daľśı generace těchto systém̊u dokázaly spolupracovat s databázemi (zejména čerpat data s databáźı). soudobé gis systémy se jǐz zcela spoléhaj́ı na databáze. tyto požadavky gis systém̊u musely reflektovat databázové systémy. nejstarš́ı sql systémy v̊ubec s prostorovými daty nepoč́ıtaly. úvod až standard ansi sql2000 v části věnované podpoře multimediálńıch dat obsahuje popis prostorových (spatial) dat. z komerčńıch databáźı je na prvńım mı́stě, co se týče podpory prostorových dat, databázový systém fy. oracle. poč́ınaje verźı oracle 7 existuje rozš́ı̌reńı oracle, samostatně distribuované, řeš́ıćı podporu prostorových dat. toto rozš́ı̌reńı se nazývá oracle spatial. odpověd́ı o.s. světa byla implementace podpory prostorových dat pro rdbms postgresql a to tak, jak předpokládá standard opengis. jednou z charakteristik postgresql je právě jeho rozšǐritelnost. v postgresql lze relativně jednoduše navrhnout vlastńı datové typy, vlastńı operace a operandy nad těmito typy. touto vlastnost́ı byl systém postgresql mezi o.s. databázovými systémy výjimečný. proto celkem logicky byl postgresql použit pro o.s. implementaci standardu opengis. část standardu opengis (chronologicky předcháźı sql2000) je zaměřena na sql databáze, které by měly sloužit jako uložǐstě prostorových dat. vycháźı z sql92 rozš́ı̌reného o geometrické typy (sql92 with geometry types), definuje metadata popisuj́ıćı funkcionalitu systému co se týká geometrických dat, a definuje datové schéma. databáze, jejichž datový model respektuje normu opengis, může sloužit jako datový server libovolné gis aplikaci, která vycháźı z tohoto standardu. dı́ky tomu může, v celé řadě př́ıpad̊u, postgresql zastoupit komerčńı db systémy. implementace standardu opengis pro postgresql se nazývá postgis. postgresql základńı geometrické typy má, úkolem postgisu je hlavně jejich obaleńı do specifického (určeného normou) datového modelu. sql/mm-spatial sice vycháźı z opengis nicméně neńı kompatibilńı. postgis je certifikován pro opengis, částečně také implementuje sql/mm. 
geinformatics fce ctu 2007 71 postgis pro vývojáře postgis obsahuje: � nové datové typy (geometry), � nové operátory (&& pr̊unik, @kompletně obsažen), � nové funkce (distance, transform), � nové tabulky (geometry columns, spatial ref sys) slouž́ı jako systémové tabulky, poskytuj́ı prostor pro metadata. standard opengis je množina dokument̊u detailně popisuj́ıćı aplikačńı rozhrańı a datové formáty v oblasti gis systémů. tato dokumentace je určena vývojář̊um a jej́ım ćılem je dosažeńı interoperability aplikaćı vyv́ıjených členy konsorcia open geospatial consortium (www stránky ogc). oblast, kterou pokrývá postgis je popsána v dokumentu ”opengis® simple features specification for sql” názvy datových typ̊u v sql/mm odvozených z opengisu vznikly spojeńım prefixu st (spatial type) a názvu datového typu v opengisu. opengis je obecněǰśı, poč́ıtá s minimálně dvěma variantami implementace geometrických typ̊u, a k tomu potřebnému zázemı́. sql/mm-spatial se dá s jistou mı́rou tolerance chápat jako podmnožina opengisu. implementace vlastńıch funkćı v postgresql vzhledem k faktu, že vlastńı datové typy lze implementovat pouze v prg. jazyce c a že postgis je implementován v c, bude popsán návrh modulu pouze s využit́ım jazyka c. při návrhu funkćı v c se prakticky všude použ́ıvaj́ı makra z postgresql knihovny. důvod̊u je několik: � použ́ıváńı univerzálńıho typu varlena pro typy s variabilńı délkou, který je v př́ıpadě, že je deľśı než 2k transparentně (jak pro uživatele, tak pro vývojáře) komprimován. � odlǐsný zp̊usob předáváńı parametr̊um (v postgresql nepř́ımo prostřednictv́ım určené hodnoty typu struct obsahuj́ıćı ukazatel na pole parametr̊u, pole s informaćı, který parametr je null, a počtem parametr̊u). � tento zp̊usob voláńı neńı závislý na programovaćım jazyku c ukázka implementace funkce concat text spojuj́ıćı dva řetězce dohromady: 01 pg_function_info_v1(concat_text); 02 03 datum 04 concat_text(pg_function_args) 05 { 06 text *arg1 = pg_getarg_text_p(0); 07 text *arg2 = pg_getarg_text_p(1); 08 int32 new_text_size = varsize(arg1) + varsize(arg2) varhdrsz; 09 text *new_text = (text *) palloc(new_text_size); 10 11 varatt_sizep(new_text) = new_text_size; 12 memcpy(vardata(new_text), vardata(arg1), varsize(arg1) varhdrsz); 13 memcpy(vardata(new_text) + (varsize(arg1) varhdrsz), 14 vardata(arg2), varsize(arg2) varhdrsz); 15 pg_return_text_p(new_text); 16 } geinformatics fce ctu 2007 72 postgis pro vývojáře komentáře: 01 registrace funkce s volaćı konvenćı 1. 03 každá funkce dosažitelná z sql muśı vracet univerzálńı datový typ datum (společný typ pro datové typy s fixńı velikost́ı (menš́ı než 64bite) a datový typ varlena). 06 źıskáńı prvńıho parametru (resp. ukazatele na něj a korektńı přetypováńı, př́ıpadně dekomprimace). 07 źıskáńı prvńıho parametru (resp. ukazatele na něj a korektńı přetypováńı, př́ıpadně dekomprimace). 08 datový typ varlena je podobný stringu v pascalu, prvńı čtyři byte obsahuj́ı údaj o délce s rozd́ılem, že údaj obsahuje velikost celé hodnoty včetně záhlav́ı. velikost hlavičky je uložena v konst. varhdrsz. výpočet velikosti vrácené hodnoty typu datum (součet velikost́ı obou řetězc̊u + velikost hlavičky). 09 postgresql má vlastńı správu paměti, tud́ıž se pamět’ nealokuje voláńım funkce malloc, ale palloc. postgresql přiděluje pamět’ z tzv. pamět’ových kontext̊u (persistentńı, transakce, voláńı funkce). při dokončeńı operace se uvolňuje odpov́ıdaj́ıćı pamět’ový kontext. 
explicitńı voláńı pfree má smysl jen u funkćı, které by bez explicitńıho uvolněńı paměti si nárokovali př́ılǐs paměti. použit́ı pamět’ových kontext̊u snižuje riziko tzv. memory leaku, tj. že vývojář zapomene vrátit alokovanou pamět’. také snižuj́ı náročnost vráceńı paměti. mı́sto několikanásobného uvolněńı malých blok̊u paměti se volá jednorázová operace. daľśım př́ıjemným efektem je nižš́ı fragmentace paměti. 15 přetypováńı z typu text na datum (př́ıpadná komprimace). každá interńı funkce se muśı před vlastńım použit́ım zaregistrovat pro jej́ı použit́ı v sql. 01 create function concat_text(text, text) returns text 02 as ’directory/funcs’, ’concat_text’ 03 language c strict; komentáře: 01 funkce concat text má dva parametry typu text a vraćı text. 02 tuto funkci postgresql nalezne v knihovně (souboru) ’directory/funcs’ pod názvem ’concat text’. 03 jedná se o binárńı knihovnu. v př́ıpadě, že jeden parametr je null, výsledkem voláńı funkce je hodnota null (atribut strict). pro voláńı existuj́ıćıch postgresql funkćı (pod volaj́ıćı konvenćı 1) muśıme použ́ıt specifický zp̊usob jejich voláńı, resp. předáńı parametr̊u. často použ́ıvanou funkćı je textin, což je funkce, která slouž́ı pro převod z-̌retězce (klasického řetězce v jazyce c ukončeného nulou) na řetězec typu varlena. následuj́ıćı funkce vrát́ı konstantńı řetězec ”hello world!”. 01 pg_function_info_v1(hello_world); 02 03 datum 04 hello_word(pg_function_args) 05 { 06 pg_return_datum(directfunctioncall1(textin, cstringgetdatum("hello world!"))); 07 } geinformatics fce ctu 2007 73 postgis pro vývojáře komentáře: 06 cstringgetdatum provád́ı pouze přetypováńı bez použit́ı funkce directfunctioncall1(1 na konci názvu funkce má význam jeden argument. tato funkce existuje ve variantách pro jeden až devět argument̊u.) by výše zmı́něná funkce vypadala následovně (z ukázky je vidět předáváńı parametr̊u v v1 volaj́ıćı konvenci): 01 pg_function_info_v1(hello_world); 02 03 datum 04 hello_word(pg_function_args) 05 { 06 functioncallinfodata locfcinfo; 07 08 initfunctioncallinfodata(locfcinfo, fcinfo->flinfo, 1, null, null); 09 10 locfcinfo.arg[0] = cstringgetdatum("hello world!") 11 locfcinfo.argnull[0] = false; 12 13 pg_return_datum(textin(&locfinfo)); 14 } komentáře: 08 datová struktura locfcinfo je inicializována pro jeden argument. 13 př́ımé voláńı funkce textin. jelikož tato funkce vraćı typ datum, který je výsledný typ funkce hello world, nedocháźı k přetypováńı. implementace vlastńıch datových typ̊u v postgresql v postgresql je každý datový typ určen svoj́ı vstupńı a výstupńı, a sadou operátor̊u a funkćı, které tento typ podporuj́ı. vstupńı funkce je funkce, která převád́ı řetězec na binárńı hodnotu. výstupńı funkce inverzně převád́ı binárńı hodnotu na řetězec. podporované operátory a funkce pak pracuj́ı s binárńı hodnotou. v př́ıpadě vkládáńı nových záznamů se vstupńı funkce volaj́ı automaticky, v př́ıpadě výraz̊u je nutné v některých př́ıpadech volat explicitńı konverzi. explicitńı konverze se v postgresql provede třemi r̊uznými zp̊usoby: � zápisem typu následovaný řetězcem � použit́ım ansi sql operátoru cast � použit́ım binárńıho operátoru pro přetypováńı ’::’ (neńı standardem) ukázka: select rodne_cislo ’7307150xxx’; select cast(’730715xxx’ as rodne_cislo); select ’730715xxx’::rodne_cislo; opengis, coby nezávislý standard, přidává vlastńı zp̊usob zápisu. v opengisu je poč́ıtáno i s variantou, že data jsou uložená textově, v př́ıpadě že databázový systém nelze rozš́ı̌rit o geometrické typy (což neńı př́ıpad postgresql). 
nicméně postgis nepouž́ıvá geometrické typy postgresql. typ se zapisuje př́ımo do řetězce následovaný vlastńımi daty, které jsou uzavřené v závorkách. tento zápis se označuje jako ’well-known text (wkt)’. pro přečteńı hodnoty tohoto typu se použ́ıvá funkce geometryfromtext. geinformatics fce ctu 2007 74 postgis pro vývojáře ukázka: insert into spatialtable ( the_geom, the_name ) values ( geomfromtext(’point(-126.4 45.32)’, 312), ’a place’); kromě textového formátu je v opengisu ještě definován binárńı formát ’well-know binary (wkb)’. konverzi z binárńıho formátu do textového (čitelná podoba) formátu provád́ı funkce asewkt (astext). interně postgis ukládá data v binárńım formátu wkb. pokud máme vstupńı a výstupńı funkci k dispozici, můžeme zaregistrovat nový datový typ př́ıkazem create type. 01 create or replace function rodne_cislo_in (cstring) 02 returns rodne_cislo as ’rc.so’,’rodne_cislo_in’ language ’c’; 03 04 create or replace function rc_out (rodne_cislo) 05 returns cstring as ’rc.so’, ’rodne_cislo_out’ language ’c’; 06 07 create type rodne_cislo ( 08 internallength = 8, 09 input = rc_in, 10 output = rc_out 11 ); komentáře: 08 jedná se o datový typ s pevnou délkou osmi bajt̊u. pokud chceme datovým typ použ́ıt v databázi, muśıme implementaci datového typu rozš́ı̌rit o implementaci základńıch binárńıch operátor̊u. poté bude možné použ́ıt vlastńı datový typ v klauzuli where a order by. 01 create or replace function rodne_cislo_eq (rodne_cislo, rodne_cislo) 02 returns bool as ’rc.so’,’rodne_cislo_eq’ language ’c’ strict; 03 04 create or replace function rodne_cislo_ne (rodne_cislo, rodne_cislo) 05 returns bool as ’rc.so’, ’rodne_cislo_ne’ language ’c’ strict; 06 07 create or replace function rodne_cislo_lt (rodne_cislo, rodne_cislo) 08 returns bool as ’rc.so’,’rodne_cislo_lt’ language ’c’ strict; 09 10 create or replace function rodne_cislo_le (rodne_cislo, rodne_cislo) 11 returns bool as ’rc.so’, ’rodne_cislo_le’ language ’c’ strict; 12 13 create operator = ( 14 leftarg = rodne_cislo, 15 rightarg = rodne_cislo, 16 procedure = rodne_cislo_eq 17 commutator = =, 18 negator = <>, 19 restrict = eqsel, 20 join = eqjoinsel 21 ); 22 23 create operator <> ( 24 leftarg = rodne_cislo, 25 rightarg = rodne_cislo, 26 procedure = rodne_cislo_ne 27 ); 28 29 create operator <( 30 leftarg = rodne_cislo, 31 rightarg = rodne_cislo, 32 procedure = rodne_cislo_le 33 ); 34 geinformatics fce ctu 2007 75 postgis pro vývojáře 35 create operator <= ( 36 leftarg = rodne_cislo, 37 rightarg = rodne_cislo, 38 procedure = rodne_cislo_le 39 ); komentáře: 13 registrace binárńıho operátoru rovno. 14 levý argument je typu rodné č́ıslo. 15 pravý argument je typu rodné č́ıslo. 16 název funkce, která zajǐst’uje operaci porovnáńı pro datový typ rodne cislo. 17 rovná se je komutátorem sama sebe, nebot’ plat́ı že x = y <=> y = x. 18 plat́ı, že x = y <=> n ot (x <> y). 19 operátor je silně restriktivńı v př́ıpadě, že jedńım argumentem je konstanta, tj. výsledkem je malá podmnožina tabulky. 20 operátoru se přǐrazuje funkce odhadu selektivity. výše uvedené operátory stále nestač́ı k tomu, aby se nad sloupcem s vlastńım typem mohl vytvořit index. každý datový typ muśı mı́t definovanou alespoň jednu tř́ıdu operátor̊u, což je v podstatě seznam operátor̊u doplněný o jejich sémantický význam. kromě operátor̊u je potřeba určit tzv. podp̊urnou funkci f (a, b), jej́ıž parametry jsou kĺıče a výsledkem celé č́ıslo (a > b => f (a, b) = 1; a = b => f (a, b) = 0; a < b => f (a, b) = −1). 
01 create or replace function rodne_cislo_cmp (rodne_cislo, rodne_cislo) 02 returns int as ’rc.so’, ’rodne_cislo__cmp’ language ’c’ strict; 03 04 create operator class rodne_cislo_ops 05 default for type rodne_cislo using btree as 06 operator 1 <, 07 operator 2 <=, 08 operator 3 = , 09 operator 4 >=, 10 operator 5 >, 11 function 1 rodne_cislo_cmp (rodne_cislo, rodne_cislo); komentáře: 06 strategie 1 má význam menš́ı než. 07 strategie 2 má význam menš́ı rovno než. 08 strategie 3 má význam rovno. 09 strategie 4 má význam vetš́ı rovno než. 10 strategie 5 má význam větš́ı než. 11 určeńı podp̊urné funkce. geinformatics fce ctu 2007 76 postgis pro vývojáře ukázka použit́ı postgisu opengis předpokládá uložeńı dat do klasických databázových tabulek. nicméně k tomu, aby tyto tabulky dokázali přeč́ıst gis aplikace je nutné do datového schématu přidat dvě systémové (z pohledu opengis) tabulky. geometry columns obsahuje informace o sloupćıch geometrii geoprvk̊u, spatial ref sys obsahuje informace o souřadnicových systémech použ́ıvaných systémem. 01 create table user_locations (gid int4, user_name varchar); 02 select addgeometrycolumn (’db_mapbender’,’user_locations’,’the_geom’,’4326’,’point’,2); 03 insert into user_locations values (’1’,’morissette’,geometryfromtext(’point(-71.060316 48.432044)’, 4326)); komentáře: 01 vytvořeńı tabulky fakt̊u (feature table) 02 do tabulky user location přidá sloupec s názvem the geom (následně přidá do tabulky geometry columns nový řádek s metadaty o sloupci the geom) 03 plněńı tabulky daty kromě plněńı datových tabulek pomoćı sql př́ıkaz̊u postgis obsahuje nástroj pro import datových (shape) soubor̊u, tzv. shape loader. dı́ky němu je možné importovat data v několika formátech. namátkou podporované formáty jsou shape, mapinfo, dgn, gml. k urychleńı operaćı prováděných nad prostorovými daty lze použ́ıt prostorový index. postgresql podporuje několik typ̊u index̊u, pro gis lze použ́ıt (ve verzi 8.2) formáty gist a gin. 01 create index 02 on 03 using gist ( [ gist_geometry_ops ] ); komentáře: 03 gist geometry ops určuje výše zmı́něnou tř́ıdu operátor̊u. analýza obsahu distribuce postgis 1.2.1 struktura adresáře: � ./ – sestavovaćı a instalačńı skripty � ./lwgeom – zdrojový kód knihoven � ./java/ejb – podpora ejb java � ./java/jdbc – jdbc ovladač pro postgresql rozš́ı̌rený o podporu gis objekt̊u � ./java/pljava – postgresql pl/java rozš́ı̌rená o prostorové objekty � ./doc – dokumentace � ./loader – programy zajǐst’uj́ıćı konverzi esri shape soubor̊u do sql (resp. postgis) a inverzńı transformaci postgis dat do shape soubor̊u (pgsql2shp a shp2pgsql) geinformatics fce ctu 2007 77 postgis pro vývojáře � ./topology – počátečńı implementace modelu topology � ./utils – pomocné skripty (aktualizace, profilace) � ./extras – kód, který se nedostal do hlavńıho stromu (wfs locks, ukázka wkb parseru) � ./regress – regresńı testy závislosti: � proj4 knihovna realizuj́ıćı transformace mezi projekcemi � geos – knihovna implementuj́ıćı topologické testy (postgresql je třeba překládat s podporou c++) typ lwgeom nahrazuje p̊uvodńı typ geometry postgisu. oproti němu je menš́ı (data jsou uložená binárně – pro point(0,0) je to úspora z 140 bajt̊u na 21 bajt̊u), podporuje 1d, 2d, 3d a 4d souřadnice, interně vycháźı z ogc wkb typu a také jeho textová prezentace je ogc wkb. typ lwgeom nahradil předchoźı typ geometry ve verzi 1.0. lw znamená light weight (vylehčený). interně v postgresql se použ́ıvá identifikátor pg lwgeom. 
hodnoty se serializuj́ı rekurzivně, za hlavičkou specifikuj́ıćı typ a atributy se serializuj́ı vlastńı data. jádro postgisu je schované v implementaci typu lwgeom. jako parser je, v prostřed́ı unix obvyklý, použitý generátor překladač̊u lex a yacc (konkrétně jejich implementace bison). syntaxe je určena v souborech wktparse.lex (lexikálńı elementy, kĺıčová slova, č́ısla) a wktparse.y (syntaktické elementy) 01 /* multipoint */ 02 03 geom_multipoint : 04 multipoint { alloc_multipoint(); } multipoint { pop(); } 05 | 06 multipointm { set_zm(0, 1); alloc_multipoint(); } multipoint {pop(); } 07 08 multipoint : 09 empty 10 | 11 { alloc_counter(); } lparen multipoint_int rparen { pop(); } 12 13 multipoint_int : 14 mpoint_element 15 | 16 multipoint_int comma mpoint_element 17 18 mpoint_element : 19 nonempty_point 20 | 21 /* this is to allow multipoint(0 0, 1 1) */ 22 { alloc_point(); } a_point { pop(); } komentáře: 03 povoleným zápisem je multipoint seznam nebo multipointm seznam. 08 seznam může být prázdný nebo je posloupnost́ı č́ısel uzavřený v závorkách. 13 seznam je bud’to o jednom prvku nebo seznam a čárkou oddělený element. 16 rekurzivńı definice seznamu. geinformatics fce ctu 2007 78 postgis pro vývojáře 22 a point je 2d 3d 4d hodnota zapsaná jako posloupnost n č́ısel oddělených mezerou. serializace a deserializace načteného syntaktického stromu je řešena v souboru lwgeom inout.c. 01 create function geometry_in(cstring) 02 returns geometry 03 as ’@module_filename@’,’lwgeom_in’ 04 language ’c’ _immutable_strict; -with (isstrict,iscachable); 05 06 create function geometry_out(geometry) 07 returns cstring 08 as ’@module_filename@’,’lwgeom_out’ 09 language ’c’ _immutable_strict; -with (isstrict,iscachable); 10 11 create type geometry ( 12 internallength = variable, 13 input = geometry_in, 14 output = geometry_out, 15 ); definice typu geometry je v souboru lwpostgis.sql.in spolu s definicemi daľśıch deśıtek databázových objekt̊u. zcela zásadńı jsou tabulky spatial ref sys a geometry columns. 01 ------------------------------------------------------------------02 -spatial_ref_sys 03 ------------------------------------------------------------------04 create table spatial_ref_sys ( 05 srid integer not null primary key, 06 auth_name varchar(256), 07 auth_srid integer, 08 srtext varchar(2048), 09 proj4text varchar(2048) 10 ); 11 12 ------------------------------------------------------------------13 -geometry_columns 14 ------------------------------------------------------------------15 create table geometry_columns ( 16 f_table_catalog varchar(256) not null, 17 f_table_schema varchar(256) not null, 18 f_table_name varchar(256) not null, 19 f_geometry_column varchar(256) not null, 20 coord_dimension integer not null, 21 srid integer not null, 22 type varchar(30) not null, 23 constraint geometry_columns_pk primary key ( 24 f_table_catalog, 25 f_table_schema, 26 f_table_name, 27 f_geometry_column ) 28 ) with oids; řada funkćı postgisu jsou realizována v jazyce pl/pgsql. což je jazyk sql procedur v prostřed́ı postgresql vycházej́ıćı z pl/sql fy. oracle (který vycháźı z prg. jazyka ada). je to celkem logické, d́ıky integraci sql jsou sql př́ıkazy zapsány úsporně a čitelně. 001 ----------------------------------------------------------------------002 -addgeometrycolumn 003 -, , , , , , 004 ----------------------------------------------------------------------005 -006 -type can be one of geometry, geometrycollection, point, multipoint, polygon, 007 -multipolygon, linestring, or multilinestring. 
008 -009 -types (except geometry) are checked for consistency using a check constraint 010 -uses sql alter table command to add the geometry column to the table. 011 -addes a row to geometry_columns. geinformatics fce ctu 2007 79 postgis pro vývojáře 012 -addes a constraint on the table that all the geometries must have the same 013 -srid. checks the coord_dimension to make sure its between 0 and 3. 014 -should also check the precision grid (future expansion). 015 -calls fix_geometry_columns() at the end. 016 -017 ----------------------------------------------------------------------018 createfunction addgeometrycolumn(varchar,varchar,varchar,varchar,integer,varchar,integer) 019 returns text 020 as 021 ’ komentáře: 018 tento zdrojový kód se v postgisu zpracovává ještě preprocesorem, takže na prvńı pohled neplatné kĺıčové slovo createfunction je správné. důvodem je potřeba jedné verze zdrojových kód̊u použitelných pro r̊uzné verze postgresql, kdy mezi nejstarš́ı verźı 7.4 a nejnověǰśı 8.2 je zřetelný rozd́ıl v možnostech a i v zápisu uložených procedur. jinak tyto verze od sebe děĺı tři roky. přestože oficiálně nejstarš́ı podporovaná verze je 7.4, v kodu je řada odkaz̊u na verzi 7.2. 022 declare 023 catalog_name alias for $1; 024 schema_name alias for $2; 025 table_name alias for $3; 026 column_name alias for $4; 027 new_srid alias for $5; 028 new_type alias for $6; 029 new_dim alias for $7; 030 rec record; 031 schema_ok bool; 032 real_schema name; 033 begin 034 035 if ( not ( (new_type =’’geometry’’) or 036 (new_type =’’geometrycollection’’) or 037 (new_type =’’point’’) or 038 (new_type =’’multipoint’’) or 039 (new_type =’’polygon’’) or 040 (new_type =’’multipolygon’’) or 041 (new_type =’’linestring’’) or 042 (new_type =’’multilinestring’’) or 043 (new_type =’’geometrycollectionm’’) or 044 (new_type =’’pointm’’) or 045 (new_type =’’multipointm’’) or 046 (new_type =’’polygonm’’) or 047 (new_type =’’multipolygonm’’) or 048 (new_type =’’linestringm’’) or 049 (new_type =’’multilinestringm’’) or 050 (new_type = ’’circularstring’’) or 051 (new_type = ’’circularstringm’’) or 052 (new_type = ’’compoundcurve’’) or 053 (new_type = ’’compoundcurvem’’) or 054 (new_type = ’’curvepolygon’’) or 055 (new_type = ’’curvepolygonm’’) or 056 (new_type = ’’multicurve’’) or 057 (new_type = ’’multicurvem’’) or 058 (new_type = ’’multisurface’’) or 059 (new_type = ’’multisurfacem’’)) ) 060 then 061 raise exception ’’invalid type name valid ones are: 062 geometry, geometrycollection, point, 063 multipoint, polygon, multipolygon, 064 linestring, multilinestring, 065 circularstring, compoundcurve, geinformatics fce ctu 2007 80 postgis pro vývojáře 066 curvepolygon, multicurve, multisurface, 067 geometrycollectionm, pointm, 068 multipointm, polygonm, multipolygonm, 069 linestringm, multilinestringm 070 circularstringm, compoundcurvem, 071 curvepolygonm, multicurvem or multisurfacem’’; 072 return ’’fail’’; 073 end if; 074 075 if ( (new_dim >4) or (new_dim <0) ) then 076 raise exception ’’invalid dimension’’; 077 return ’’fail’’; 078 end if; 079 080 if ( (new_type like ’’%m’’) and (new_dim!=3) ) then 081 082 raise exception ’’typem needs 3 dimensions’’; 083 return ’’fail’’; 084 end if; 085 086 if ( schema_name != ’’’’ ) then 087 schema_ok = ’’f’’; 088 for rec in select nspname from pg_namespace where text(nspname) = schema_name loop 089 schema_ok := ’’t’’; 090 end loop; 091 092 if ( schema_ok <> ’’t’’ ) then 093 raise notice ’’invalid schema name using current_schema()’’; 094 select current_schema() into real_schema; 095 
else 096 real_schema = schema_name; 097 end if; 098 099 else 100 select current_schema() into real_schema; 101 end if; 102 103 -add geometry column 104 105 execute ’’alter table ’’ || 106 quote_ident(real_schema) || ’’.’’ || quote_ident(table_name) 107 || ’’ add column ’’ || quote_ident(column_name) || 108 ’’ geometry ’’; 109 komentáře: 105 prostřednictv́ım dynamického sql přidává sloupec do ćılové tabulky fakt̊u. 110 -delete stale record in geometry_column (if any) 111 112 execute ’’delete from geometry_columns where 113 f_table_catalog = ’’ || quote_literal(’’’’) || 114 ’’ and f_table_schema = ’’ || 115 quote_literal(real_schema) || 116 ’’ and f_table_name = ’’ || quote_literal(table_name) || 117 ’’ and f_geometry_column = ’’ || quote_literal(column_name); 118 119 -add record in geometry_column 120 121 execute ’’insert into geometry_columns values (’’ || 122 quote_literal(’’’’) || ’’,’’ || 123 quote_literal(real_schema) || ’’,’’ || 124 quote_literal(table_name) || ’’,’’ || 125 quote_literal(column_name) || ’’,’’ || 126 new_dim || ’’,’’ || new_srid || ’’,’’ || geinformatics fce ctu 2007 81 postgis pro vývojáře 127 quote_literal(new_type) || ’’)’’; 128 komentáře: 112 prostřednictv́ım dynamického sql ruš́ı sloupec, pokud byl takový, v tabulce metadat geometry columns. 121 prostřednictv́ım dynamického sql vkládá metadata o sloupci do tabulky geometry columns. 129 -add table checks 130 131 execute ’’alter table ’’ || 132 quote_ident(real_schema) || ’’.’’ || quote_ident(table_name) 133 || ’’ add constraint ’’ 134 || quote_ident(’’enforce_srid_’’ || column_name) 135 || ’’ check (srid(’’ || quote_ident(column_name) || 136 ’’) = ’’ || new_srid || ’’)’’ ; 137 138 execute ’’alter table ’’ || 139 quote_ident(real_schema) || ’’.’’ || quote_ident(table_name) 140 || ’’ add constraint ’’ 141 || quote_ident(’’enforce_dims_’’ || column_name) 142 || ’’ check (ndims(’’ || quote_ident(column_name) || 143 ’’) = ’’ || new_dim || ’’)’’ ; 144 145 if (not(new_type = ’’geometry’’)) then 146 execute ’’alter table ’’ || 147 quote_ident(real_schema) || ’’.’’ || quote_ident(table_name) 148 || ’’ add constraint ’’ 149 || quote_ident(’’enforce_geotype_’’ || column_name) 150 || ’’ check (geometrytype(’’ || 151 quote_ident(column_name) || ’’)=’’ || 152 quote_literal(new_type) || ’’ or (’’ || 153 quote_ident(column_name) || ’’) is null)’’; 154 end if; 155 156 return 157 real_schema || ’’.’’ || 158 table_name || ’’.’’ || column_name || 159 ’’ srid:’’ || new_srid || 160 ’’ type:’’ || new_type || 161 ’’ dims:’’ || new_dim || chr(10) || ’’ ’’; 162 end; 163 ’ 164 language ’plpgsql’ _volatile_strict; -with (isstrict); 165 166 ---------------------------------------------------------------------------167 -addgeometrycolumn ( ,
, , , , ) 168 ---------------------------------------------------------------------------169 -170 -this is a wrapper to the real addgeometrycolumn, for use 171 -when catalogue is undefined 172 -173 ---------------------------------------------------------------------------174 createfunction addgeometrycolumn(varchar,varchar,varchar,integer,varchar,integer) returns text as ’ 175 declare 176 ret text; 177 begin 178 select addgeometrycolumn(’’’’,$1,$2,$3,$4,$5,$6) into ret; 179 return ret; 180 end; 181 ’ 182 language ’plpgsql’ _stable_strict; -with (isstrict); geinformatics fce ctu 2007 82 postgis pro vývojáře 183 184 ---------------------------------------------------------------------------185 -addgeometrycolumn (
, , , , ) 186 ---------------------------------------------------------------------------187 -188 -this is a wrapper to the real addgeometrycolumn, for use 189 -when catalogue and schema are undefined 190 -191 ---------------------------------------------------------------------------192 createfunction addgeometrycolumn(varchar,varchar,integer,varchar,integer) returns text as ’ 193 declare 194 ret text; 195 begin 196 select addgeometrycolumn(’’’’,’’’’,$1,$2,$3,$4,$5) into ret; 197 return ret; 198 end; 199 ’ 200 language ’plpgsql’ _volatile_strict; -with (isstrict); komentáře: 174 přet́ıžeńı funkćı (tj. existuje v́ıce funkćı stejného jména s r̊uznými parametry) se v postgresql (dle standardu ansi) použ́ıvá také k náhradě nepodporovaných volitelných parametr̊u. funkce definované na řádćıch 174 a 192 se použ́ıvaj́ı v př́ıpadě, že chyb́ı hodnoty parametr̊u katalog a schéma. v ansi sql se nepouž́ıvá termı́n databáze, ale katalog, který obsahuje jemněǰśı děleńı na jednotlivá schémata. zdrojový kód esri arcsde podporovaná podmnožiny sql/mm funkćı je v souboru sqlmm.sql. 01 -postgis equivalent function: ndims(geometry) 02 createfunction st_coorddim(geometry) 03 returns smallint 04 as ’@module_filename@’, ’lwgeom_ndims’ 05 language ’c’ _immutable_strict; -with (iscachable,isstrict); 06 07 -postgis equivalent function: geometrytype(geometry) 08 createfunction st_geometrytype(geometry) 09 returns text 10 as ’ 11 declare 12 gtype text := geometrytype($1); 13 begin 14 if (gtype in (’’point’’, ’’pointm’’)) then 15 gtype := ’’point’’; 16 elsif (gtype in (’’linestring’’, ’’linestringm’’)) then 17 gtype := ’’linestring’’; 18 elsif (gtype in (’’polygon’’, ’’polygonm’’)) then 19 gtype := ’’polygon’’; 20 elsif (gtype in (’’multipoint’’, ’’multipointm’’)) then 21 gtype := ’’multipoint’’; 22 elsif (gtype in (’’multilinestring’’, ’’multilinestringm’’)) then 23 gtype := ’’multilinestring’’; 24 elsif (gtype in (’’multipolygon’’, ’’multipolygonm’’)) then 25 gtype := ’’multipolygon’’; 26 else 27 gtype := ’’geometry’’; 28 end if; 29 return ’’st_’’ || gtype; 30 end 31 ’ 32 language ’plpgsql’ _immutable_strict; -with (isstrict,iscachable); komentáře: geinformatics fce ctu 2007 83 postgis pro vývojáře 02 vytvořeńı synonyma pro postgis funkci. 08 zapouzdřeńı postgis funkce kódem v plpgsql. v tomto př́ıpadě se st́ırá rozd́ıl mezi typy multipoint a multipointm (analogicky u daľśıch typ̊u). řešeńı výkonných funkćı v postgisu kromě vlastńı implementace datových typ̊u postgis obsahuje implementaci pomocných funkćı nad prostorovými daty. následuj́ıćı př́ıklady jsou funkce z lwgeom functions basic.c, které mohou sloužit jako vzor pro vytvářeńı vlastńıch funkćı. funkce lwgeom makepoint se použ́ıvá pro vytvořeńı 2d bodu na základě zadaných souřadnic. 
01 pg_function_info_v1(lwgeom_makepoint); 02 datum lwgeom_makepoint(pg_function_args) 03 { 04 double x,y,z,m; 05 lwpoint *point; 06 pg_lwgeom *result; 07 08 x = pg_getarg_float8(0); 09 y = pg_getarg_float8(1); 10 11 if ( pg_nargs() == 2 ) point = make_lwpoint2d(-1, x, y); 12 else if ( pg_nargs() == 3 ) { 13 z = pg_getarg_float8(2); 14 point = make_lwpoint3dz(-1, x, y, z); 15 } 16 else if ( pg_nargs() == 4 ) { 17 z = pg_getarg_float8(2); 18 m = pg_getarg_float8(3); 19 point = make_lwpoint4d(-1, x, y, z, m); 20 } 21 else { 22 elog(error, "lwgeom_makepoint: unsupported number of args: %d", 23 pg_nargs()); 24 pg_return_null(); 25 } 26 27 result = pglwgeom_serialize((lwgeom *)point); 28 29 pg_return_pointer(result); 30 } komentáře: 01, 02 standardńı záhlav́ı funkce pro v1 volaj́ıćı konvenci 08, 09 źıskáńı prvńıch dvou argument̊u typu float8 11 pokud počet argument̊u funkce je roven dvěma, volá se exterńı funkce make lwpoint2d, jinak se zjǐst’uje počet argument̊u a podle něj se volá odpov́ıdaj́ıćı verze exterńı funkce. 22 funkce elog se použ́ıvá pro vyvoláńı výjimky, pokud je level error. v př́ıpadě, že level je notice, zobraźı lad́ıćı hlášeńı. 24 kód za elog(error,...) se již neprovád́ı. v tomto př́ıpadě pg return null() slouž́ı k utǐseńı překladače ohledně zobrazeńı varováńı. 27 serializace objektu do typu pg lwgeom. geinformatics fce ctu 2007 84 postgis pro vývojáře 29 výstupem z funkce je ukazatel na serializovanou hodnotu objektu, provede se konverze na typ datum. sql registrace této funkce vypadá následovně: 01 createfunction makepoint(float8, float8) 02 returns geometry 03 as ’@module_filename@’, ’lwgeom_makepoint’ 04 language ’c’ _immutable_strict; -with (iscachable,isstrict); 05 06 createfunction makepoint(float8, float8, float8) 07 returns geometry 08 as ’@module_filename@’, ’lwgeom_makepoint’ 09 language ’c’ _immutable_strict; -with (iscachable,isstrict); 10 11 createfunction makepoint(float8, float8, float8, float8) 12 returns geometry 13 as ’@module_filename@’, ’lwgeom_makepoint’ 14 language ’c’ _immutable_strict; -with (iscachable,isstrict); komentáře: 01, 06, 11 tato implementace je ukázkou přet́ıžené funkce (s r̊uzným počtem parametr̊u), kdy všechny varianty této přet́ıžené funkce sd́ıĺı jednu funkci implementovanou v jazyce c. funkce lwgeom inside circle point slouž́ı k určeńı, zda-li je bod uvnitř nebo vně kruhu. 01 pg_function_info_v1(lwgeom_inside_circle_point); 02 datum lwgeom_inside_circle_point(pg_function_args) 03 { 04 pg_lwgeom *geom; 05 double cx = pg_getarg_float8(1); 06 double cy = pg_getarg_float8(2); 07 double rr = pg_getarg_float8(3); 08 lwpoint *point; 09 point2d pt; 10 11 geom = (pg_lwgeom *)pg_detoast_datum(pg_getarg_datum(0)); 12 point = lwpoint_deserialize(serialized_form(geom)); 13 if ( point == null ) { 14 pg_free_if_copy(geom, 0); 15 pg_return_null(); /* not a point */ 16 } 17 18 getpoint2d_p(point->point, 0, &pt); 19 20 pg_free_if_copy(geom, 0); 21 22 pg_return_bool(lwgeom_pt_inside_circle(&pt, cx, cy, rr)); 23 } komentáře: 11 z toast hodnoty muśıme źıskat serializovanou hodnotu typu pg lwgeom. toast je pro uživatele databáze (nikoliv pro vývojáře) transparentńı mechanismus zajǐst’uj́ıćı kompresi a uložeńı serializovaných řetězc̊u deľśıch než 2kb. postgresql interně použ́ıvá datové stránky o 8kb a žádná do databáze uložená hodnota (vyjma tzv. blobu) nemůže tuto velikost přesáhnout. toto omezeńı se obcháźı právě metodou nazvanou toast, kdy se deľśı hodnoty děĺı a ukládaj́ı do speciálńı tabulky do v́ıce řádk̊u po maximálně 2kb). 12 deserializace typu point. 
the definition of the function lwgeom_pt_inside_circle (measures.c):

01 int lwgeom_pt_inside_circle(POINT2D *p, double cx, double cy, double rad)
02 {
03     POINT2D center;
04
05     center.x = cx;
06     center.y = cy;
07
08     if ( distance2d_pt_pt(p, &center) < rad ) return 1;
09     else return 0;
10 }

the function lwgeom_inside_circle_point is registered by the following sql statement:

01 create function point_inside_circle(geometry,float8,float8,float8)
02     returns bool
03     as '@module_filename@', 'lwgeom_inside_circle_point'
04     language 'c' _immutable_strict; -- with (isstrict);

gist index support in postgis

indexes of the r-tree type are specific to spatial, multidimensional data (they were designed for it). the current version of postgresql already offers the next generation of this class of database indexes, the so-called gist (generalized search tree) indexes. their principle is the same, but their range of application is wider. in postgresql, gist indexes are used for full-text search, for indexing the contents of arrays, for custom support of hierarchical data and, of course, for the geometric types. as already mentioned, an r-tree index assumes that the indexed data have at least two dimensions. the index has a tree structure, and every non-leaf node contains references to its children and, above all, the geometry of the smallest rectangular n-dimensional body containing all of its children. gist is an application interface that allows an arbitrary type of index to be implemented: b-tree, r-tree. the advantage of gist indexes is that developers who know the domain can create domain-specific indexes bound to their own types without necessarily having to become database specialists (implementing a gist index is, however, definitely not trivial programming). the use of the gist index in postgis is a showcase example. the criterion for deciding whether to use a b-tree index or a gist index is the set of operations we want the index to speed up. if the set of binary operators <, =, > is sufficient, a b-tree index should be considered. otherwise there is no choice but to use a gist index, which is more general than a b-tree index.

spatial indexes can also be used for ordinary data; one-dimensional data, however, first have to be converted to multidimensional data. the following example shows a case where the one-dimensional approach fails. consider a database of events described by a start time (start_time) and an end time (end_time). if we want to list the events that were in progress during a certain interval, we write a query with the condition

01 where start_time < t and end_time > (t + n)

indexing the columns start_time and end_time certainly helps, but in this case the indexes have low selectivity (on average each of them returns half of the rows of the table). start_time and end_time are two one-dimensional series, so it is quite natural to treat them as a single two-dimensional series and to transform the previous condition into a form built on the geometric operators.

01 where box(point(t, t), point((t + n), (t + n))) @
02       box(point(start_time, start_time),
03           point(end_time, end_time))

comments:
01 the expression is not valid as written; for readability it omits the necessary conversion from the date type to numbers.
02 the @ operator means "is completely contained by".
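such a transformed condition only pays off when it can be answered from an index. a minimal sketch of what that could look like is given below: an expression gist index over the constructed boxes, using the built-in gist operator class for the box type. the table and index names are illustrative, and the date-to-number conversion mentioned above is again omitted.

create index events_period_idx on events
    using gist (box(point(start_time, start_time), point(end_time, end_time)));

with such an index in place, the planner can evaluate the containment condition by descending a tree of bounding rectangles instead of scanning the whole table.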
description and use of a gist index

a gist index is a balanced tree that always stores pairs (key (predicate), pointer to data) in the boundary nodes of the tree (the leaves), and pairs (assertion, pointer to child) in the inner nodes of the tree. the pair (assertion, pointer) is called an index entry. every node may contain several index entries. the assertion (predicate) always holds for all keys reachable from the given node. this is a concept that appears in all tree-based indexes. in the already mentioned r-tree index the assertion is the bounding rectangle containing all the points reachable from the inner node (the r-tree is an index designed for spatial data).

every gist index is defined by the following operations:

operations on keys – these methods are specific to the given class of objects and de facto determine the configuration of the gist index (key methods): consistent, union, compress, decompress, penalty, picksplit.

operations on the tree – generic operations that call the key methods (tree methods): search (consistent), insert (penalty, picksplit), delete (consistent).

the key methods are the following (the meaning of the operation for spatial data is given in parentheses):

consistent – if the index entry guarantees that the assertion does not match a query with the given value, the method returns false; otherwise it returns true. if the operation incorrectly returns true, this error does not affect the result, it only reduces the efficiency of the algorithm (true if there is an overlap, otherwise false).

union – for a given set of index entries, returns an assertion that is valid for all entries in the set.

compress – converts the data into a format suitable for physical storage on an index page (for spatial data, the bounding rectangle is determined).

decompress – the opposite of compress; converts the binary representation stored in the index into a form the api can manipulate (the bounding rectangle is loaded).

penalty – returns a value representing the cost of inserting a new item into a particular part of the tree. the item is inserted into the part of the tree where this value (the penalty) is lowest (for spatial data, the increase of the area of the bounding rectangle is evaluated).

picksplit – when an index page has to be split, this function decides which items stay on the original page and which are moved to a new index page.

same – returns true if two items are identical.

the basic data structure used in the functions implementing a gist index is GISTENTRY:

01 /*
02  * an entry on a gist node. contains the key, as well as its own
03  * location (rel,page,offset) which can supply the matching pointer.
04  * leafkey is a flag to tell us if the entry is in a leaf node.
05  */
06 typedef struct GISTENTRY
07 {
08     Datum        key;
09     Relation     rel;
10     Page         page;
11     OffsetNumber offset;
12     bool         leafkey;
13 } GISTENTRY;

comments:
09 identification of the table
10 identification of the data page
11 position on the data page

and GistEntryVector:

01 /*
02  * vector of gistentry structs; user-defined methods union and pick
03  * split takes it as one of their arguments
04  */
05 typedef struct
06 {
07     int32     n;        /* number of elements */
08     GISTENTRY vector[1];
09 } GistEntryVector;
10
11 #define GEVHDRSZ (offsetof(GistEntryVector, vector))

the following example shows the union method, which returns the smallest possible rectangle covering all the given entries:

01 PG_FUNCTION_INFO_V1(lwgeom_gist_union);
02 Datum lwgeom_gist_union(PG_FUNCTION_ARGS)
03 {
04     GistEntryVector *entryvec = (GistEntryVector *) PG_GETARG_POINTER(0);
05     int *sizep = (int *) PG_GETARG_POINTER(1);
06     int numranges,
07         i;
08     BOX2DFLOAT4 *cur,
09                 *pageunion;
10
11     numranges = entryvec->n;
12     cur = (BOX2DFLOAT4 *) DatumGetPointer(entryvec->vector[0].key);
13
14     pageunion = (BOX2DFLOAT4 *) palloc(sizeof(BOX2DFLOAT4));
15     memcpy((void *) pageunion, (void *) cur, sizeof(BOX2DFLOAT4));
16
17     for (i = 1; i < numranges; i++)
18     {
19         cur = (BOX2DFLOAT4 *) DatumGetPointer(entryvec->vector[i].key);
20
21         if (pageunion->xmax < cur->xmax)
22             pageunion->xmax = cur->xmax;
23         if (pageunion->xmin > cur->xmin)
24             pageunion->xmin = cur->xmin;
25         if (pageunion->ymax < cur->ymax)
26             pageunion->ymax = cur->ymax;
27         if (pageunion->ymin > cur->ymin)
28             pageunion->ymin = cur->ymin;
29     }
30
31     *sizep = sizeof(BOX2DFLOAT4);
32
33     PG_RETURN_POINTER(pageunion);
34 }

comments:
04 the first argument contains a pointer to a GistEntryVector.
05 the second argument contains a pointer to the size of the returned data structure.
14, 15 allocation of the output structure pageunion and its initialization with the first element of the vector.
12, 19 filling of the cur structure (iteration over the elements of the gist vector, which holds values of type Datum – in this case pointers to BOX2DFLOAT4).
21–28 search for the smallest possible rectangle containing all the given entries.
33 the pointer to the output structure is returned.

every function supporting the gist index must first be registered as a postgresql function, and all the relevant functions then appear once more in the registration of the gist index itself:

create function lwgeom_gist_union(bytea, opaque_type)
    returns opaque_type
    as '@module_filename@', 'lwgeom_gist_union'
    language 'c';

01 create operator class gist_geometry_ops
02     default for type geometry using gist as
03     operator 1 << recheck,
04     operator 2 &< recheck,
05     operator 3 && recheck,
06     operator 4 &> recheck,
07     operator 5 >> recheck,
08     operator 6 ~= recheck,
09     operator 7 ~ recheck,
10     operator 8 @ recheck,
11     operator 9 &<| recheck,
12     operator 10 <<| recheck,
13     operator 11 |>> recheck,
14     operator 12 |&> recheck,
15     function 1 lwgeom_gist_consistent (internal, geometry, int4),
16     function 2 lwgeom_gist_union (bytea, internal),
17     function 3 lwgeom_gist_compress (internal),
18     function 4 lwgeom_gist_decompress (internal),
19     function 5 lwgeom_gist_penalty (internal, internal, internal),
20     function 6 lwgeom_gist_picksplit (internal, internal),
21     function 7 lwgeom_gist_same (box2d, box2d, internal);
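once the operator class is installed, the planner can answer the spatial operators listed above through the index. a short usage sketch follows; the table and column names are illustrative, while GeomFromText and the && bounding-box overlap operator are standard postgis pieces.

create index parcels_geom_idx on parcels using gist (geom);

select id
  from parcels
 where geom && geomfromtext('polygon((10 10, 20 10, 20 20, 10 20, 10 10))', -1);

the && condition is evaluated against the bounding rectangles stored in the index, and only the candidate rows are then re-checked on the actual geometries (this is what the recheck keyword in the operator class declares).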
conclusion

the aim of this work was to prepare material that makes it easier to find one's way around the implementation of the opengis standard in the open-source database system postgresql – postgis. the most important components of postgis have been described; the rest have not, which was not the goal of this work anyway. although it is unlikely that anyone could design their own postgis extension without prior knowledge of postgresql, c and gis applications as such, i hope that thanks to this work further extensions can be built on top of this rather successful product.

references

- antonín orlík: správa časoprostorových dat v prostředí postgresql/postgis (management of spatio-temporal data in postgresql/postgis)
- http://postgis.refractions.net/docs/
- návrh a realizace udf v c pro postgresql (design and implementation of udfs in c for postgresql), http://www.pgsql.cz/index.php/n%c3%a1vrh_a_realizace_udf_v_c_pro_postgresql#n.c3.a1vrh_vlastn.c3.adch_datov.c3.bdch_typ.c5.af
- access methods for next-generation database systems, http://citeseer.ist.psu.edu/rd/0%2c448594%2c1%2c0.25%2cdownload/http://citeseer.ist.psu.edu/cache/papers/cs/22615/http:zszzszs2k-ftp.cs.berkeley.edu:8000zsz%7emarcelzszdisszszdiss.pdf/access-methods-for-next.pdf
- spatial data management, http://www.mapbender.org/presentations/spatial_data_management_arnulf_christl/spatial_data_management_arnulf_christl.pdf

role of interdisciplinary cooperation in process of documentation of cultural heritage

jindřich hodač1, michael rykl2
1czech technical university in prague, faculty of civil engineering
2czech technical university in prague, faculty of architecture
thákurova 7, prague 6, czech republic
hodac@fsv.cvut.cz, rykl@fa.cvut.cz

keywords: metrical documentation, building-historical research, photogrammetry, cooperation, education

abstract: this paper presents the results of a long-term interdisciplinary cooperation in the process of documentation of cultural heritage. two sides are joined in this cooperation. the first side is a "submitter", in our case an architect-historian (mr. rykl). the second side is a "contractor", in our case a surveyor-photogrammetrist (mr. hodač and his students). we cooperate mostly on projects of metrical documentation of cultural heritage buildings and sites. our cooperation takes place mainly in bachelor's and master's projects. another opportunity for our collaboration is our joint course [1]. we offer this course to students of two faculties/specializations (surveyors + architects). besides the wide range of real results (2d drawings, 3d models, photomaps etc.), we have also collected quite a lot of experience with the process of collaboration itself. cooperation and communication between submitter and contractor play key roles in a successful project. in general, it can be expected that the submitter gives the "task" and the contractor tries to find a proper technology to solve it. the process of communication should be permanent, because new circumstances and findings arise all the time. it is very important for everyone involved to find a common language across specializations in order to understand each other. surveyors are "slightly pressed" to gain more knowledge about historical building constructions.
architects-historians should get a basic awareness of the various recent technologies for metrical documentation and their "pros and cons".

1. introduction

the projects we cooperate on are mostly practically oriented. the course of our typical project has evolved during the period of our collaboration into a stable form; this form is shown in appendix 1. each side involved has a specific role in the project. what is different from a common submitter – contractor relationship is the very close and intensive cooperation before, during and after the project. both sides are highly motivated and follow the same aim. the contractors/students are gently drawn into the process of building-historical research (bhr). as a result, they clearly know what they are working on and what the purpose is. this situation helps them to activate their creativity, and their ability to cope with emerging questions gradually grows.

1.1 cooperation – main characteristics

the submitter defines each task within the project with regard to the specific goal of the bhr. we (submitter + contractor) then try to find an appropriate form of record to meet that goal. this process of clarifying the form and content is continuous. in some cases it leads to the use of very complex technologies (e.g. laser scanning, optical correlation systems etc.), in other cases only very simple methods are used (e.g. image rectification). a wide range of technologies is available today; from the technological point of view nearly "everything is possible". our approach in this technological area is quite pragmatic: we search for technologies that are as simple as possible and yet fully solve the task. the effectiveness of the means used in a project is one of the important parameters we follow. our most common approach is a combination of various documentation methods. our communication often takes the form of a dialog. the submitter makes a goal-oriented probe into the subject of research and, based on that, defines clear questions for the contractor. the dialog leads us through the project systematically. partial outputs of the project help the submitter to "understand deeply" during the bhr process and then to define the tasks for the next step. from this point of view the whole process is alive, variable and trial-and-error oriented, but it follows the main goal of the project. understanding each other is the key point of communication. the specializations involved have their own terminology, their own language. the first essential step is to find the same level of conversation, which is clear to both sides. crossing the borders of specializations is necessary, as well as the ability of attentive listening and patient explaining. only under these conditions of mutual interaction can we achieve the state where the submitter is able to specify "what he really needs" and the contractor is able to find and simply explain "how to do it". the above-described type of relationship should reduce or even remove a certain impatience with new technologies on the historian's side and a certain blind fascination with the same technologies on the surveyor's side. cultivating the ability to keep a "healthy distance" from one's own specialization is a useful ingredient in this process.

1.2 cooperation – types of projects

the projects we collaborate on can be divided into four basic types. each type has its own specifics.
various kinds of activities usually blend in a project, but one of them is always dominant. the main activity is in a direct relation to the main goal of the project. the first type of our projects is "research"; it covers projects which are purely focused on research in the area of bhr. most of them have the form of a dialog and the documentation results are innovative. the second type is "support"; this includes projects that are focused on the creation of metrical documentation as a support of the standard bhr process. the results of these projects are common types of metrical documentation. the third type is "emergency"; such projects are focused on emergency documentation of details or complexes of buildings and sites. the fourth type of our projects is "presentation"; the main purpose of these projects is the presentation of bhr results. all these types will be discussed in more detail in the following chapters.

2. research projects

this type of project is mainly focused on verifying hypotheses about building/site development. the hypotheses are defined by the submitter during the bhr process. the topic of a project can be, for example, the reconstruction of the geometric shape of parts of buildings which were destroyed over the ages. this reconstruction is then created on the basis of precise metrical documentation of their remains. various methods are used for documentation in this case, and the most common output has the form of a 3d model. a second type of topic is focused on the precise documentation of parts of buildings which exist in their original state but whose shape and geometry are not precisely known. research targeted at the geometry of vaults is an example of this type of project. the submitter-contractor communication in these projects is the most intensive. the course of the projects is continuously modified based on the partial outputs. the main features of this type of project are summarized in table 1.

table 1: research projects main features

2.1 example 1 – geometry of existing vault

this project was focused on the verification of a hypothesis about the construction of the vault of a gothic hall in a small fortress in central bohemia [2]. laser scanning technology was used as the main documentation method. various types of outputs were created in close cooperation with the submitter or directly on his demand. the construction process of this vault was finally clarified by the submitter with the help of the documentation results. this project was presented at a bhr conference in the form of a dialog between submitter and contractor (questions-answers). various types of results are presented in figures 1 and 2.

2.2 example 2 – geometry of destroyed vault

these two projects were focused on the verification of hypotheses about the geometric shape of vaults which were not preserved. the vault of a scullery in a gothic fortress in southern bohemia was the topic of the first project, and the vault of a pulpit of a romanesque church in western bohemia was the second topic. a combination of methods was used in both projects. stereophotogrammetry and an optical correlation system were used as the main methods for precise documentation of the remains of the vaults. the reconstruction of the hypothetical shape of the vaults was created in close cooperation and with great help of the submitter, and it enriched our knowledge about the historical development of these buildings. the results of the projects are shown in figures 3 and 4.
figure 1: analysis of vault – 3d model
figure 2: analysis of vault – contour lines
figure 3: reconstruction of scullery vault
figure 4: reconstruction of vault of church pulpit

3. support of bhr projects

this type of project is mainly focused on the creation of a quality foundation for the bhr process. the parameters and forms of the outputs are clarified during submitter-contractor discussions. this type of communication can continue throughout the whole course of a project, and it leads to results which are highly customized to the submitter's needs. the results are almost immediately used for the bhr done by the submitter. combinations of various methods are used for documentation in this case, and the most common output has the form of 2d data (e.g. a photomap). the methods used are mostly simpler than in other types of projects. the processing of the results is sometimes done partly by the contractor with the help of the submitter, which means that both sides are slightly pushed to cross the rigid borders of their specializations. the main features of this type of project are summarized in table 2.

table 2: support projects main features

3.1 example 1 – photomap and its interpretation

this project was focused on the creation of photomaps of part of the facades of a small fortress in central bohemia [3]. the standard workflow of single-image photogrammetry was used and photomaps were created. the second step of the project was the building-historical interpretation of the content of the photomaps. intensive submitter-contractor cooperation was necessary during the early parts of this period. a quality check done by the submitter was the final step. an example of a result is shown in figure 5.

3.2 example 2 – photomap with cross-section

this project had a similar assignment as the project presented above. the building of interest and the methods used were also the same. the second step was different: the projection of cross-sections into the photomap was demanded by the submitter in order to understand more deeply the spatial composition of the selected parts. the key submitter-contractor communication took place during the fieldwork, when the parameters of the cross-sections were clarified in situ. standard surveying methods were used for the cross-section documentation. an example of a result is shown in figure 6.

4. emergency documentation projects

this type of project is mainly focused on emergency documentation of buildings/sites and their parts at risk. time, work safety and technical conditions on site play key roles in these projects. the demand for documentation is formulated by the submitter. it is really necessary to discuss and clearly identify the priorities of the project and the parameters of the results. technologies for the quick collection of a maximum of data are commonly used because of the circumstances of such projects. laser scanning and optical correlation systems are widely used, and the results are in the form of a 3d model. the conditions during the data collection process are often difficult (not much space, not much light, time pressure etc.). this has some influence on the data quality, but mostly it is not possible to wait for better conditions in the field (e.g. archeological prospecting with an excavator above one's head). close submitter-contractor cooperation is necessary in the search for an effective documentation technology. high-end technologies such as laser scanning are not always available, and also not always convenient, for various reasons (budget etc.). the main features of this type of project are summarized in table 3.
table 3: emergency projects main features
figure 5: interpretation of photomap
figure 6: photomap with cross-section

4.1 example 1 – stucco decoration of vault

this project was focused on the emergency documentation of the most valuable parts of the stucco decoration (putti) of a baroque vault in a castle near prague. an optical correlation system was used as the main documentation technology. very detailed 3d models of the putti were created, and a complex model (not so detailed) of the whole vault was another result. a slow destruction of the vault and its decorations was discovered during the course of the project. partial results of the processing were discussed, and the high emphasis on the accuracy of the documentation on the side of the submitter revealed the necessity of further phases of fieldwork. this process of continuous refinement led to very high quality outputs. an example of a result is shown in figure 7.

4.2 example 2 – archeological site

this project was focused on the emergency documentation of an archeological site in the centre of prague [4]. laser scanner technology was not available, so only optical correlation technology was used, similarly as in the first project. a huge amount of image data was collected and it is still being processed step by step. the conditions during the fieldwork were not ideal (time pressure, light problems), but a detailed 3d model of part of the site has already been created in high quality. close cooperation, help and patience were necessary mainly during the on-site work (many people in a small space etc.). the documented ruins were destroyed a few days after the last fieldwork. from this point of view the collected data are a very valuable source of information for the future. an example of a partial result is shown in figure 8.

figure 7: textured 3d model of putti
figure 8: 3d model of part of matthew tower

5. presentation projects

this type of project is mainly focused on the illustrative presentation of research outputs. the standard graphic form of bhr outputs is two-dimensional (drawings, schemes etc.). a visualization (3d model) of the findings gives a better idea about the spatial relationships of different parts of buildings/sites. the submitter defines the main task of the visualization. a subsequent discussion with the contractor leads to a proposal of the technology, parameters and forms of the results. existing data sources are mostly combined with supplemental measurement (different simple methods) in situ. the submitter is fully involved in the process of creation of the final results. in some cases these results reveal the necessity of a partial improvement of the bhr. the main features of this type of project are summarized in table 4.

table 4: presentation projects main features

5.1 example 1 – reconstruction of historical appearance of a fortress

this project presented the results of the bhr of a part of a small gothic fortress in southern bohemia [5]. a 3d model was created using existing 2d drawings (earlier metrical documentation), the results of the bhr and simple measurements in the building. measurement by tape was performed in order to improve the above-mentioned 2d drawings. detailed photo-documentation was also taken. the 3d model displays a hypothetical state of the building during the researched historical period.
the results of the project were presented together with other bhr results at a specialized seminar. an example of a result is shown in figure 9.

5.2 example 2 – development of ramparts

this project presented the results of the bhr of the ramparts of a gothic fortress in central bohemia [5]. intersection photogrammetry was used as the documentation method. the output was a 3d model of the actual state of the area of interest. this model was combined with 2d drawings (bhr outputs). the projection of these drawings onto the 3d model was done in close cooperation between submitter and contractor. the final 3d model allows a better understanding of the building development. the created model became one of the important sources for the reconstruction of the appearance of the fortress in various historical eras. these reconstructions were subsequently done by the submitter. an example of a result is shown in figure 10.

figure 9: reconstruction of a part of a fortress
figure 10: historical development of ramparts

6. conclusion

we can say that the described type of cooperation leads to results of very good quality. it is true that it is quite time-demanding for everyone, but the specialists on both sides are enriched and, in the end, they are very satisfied with the project and its results. we cannot expect that in real life the course of the projects will always run as ideally as we practise it. we are trying to show our students how to do it, what is important in the process and, last but not least, how to make interdisciplinary collaboration successful.

7. references

[1] hodac, j., rykl, m.: metrical documentation of historical buildings – joint course of the faculty of civil engineering and faculty of architecture, ctu in prague, desta 2008, nečtiny, march 2008, lfgm.fsv.cvut.cz/~hodac/dokumenty/hodac_nectiny2008.pdf, 2011-05-29.
[2] rykl, m., sunkevic, m., hodac, j.: gothic vault of hall of fortress in popovice, proceedings of the 5th conference of bhr, znojmo, june 2006, 365-380.
[3] hodac, j.: simple photogrammetry in applications of cultural heritage documentation, proceedings of the workshop on photogrammetry and remote sensing, telč, november 2008, 51-56.
[4] frydecky, l., hodac, j.: creation of 3d model of archeological site, proceedings of the 10th conference computer support in archeology, dalešice, may 2011, in print.
[5] rykl, m.: composition of residential part of medieval fortress in bohemia, dissertation thesis, prague, ctu faculty of architecture, 2010.
[6] pavelka, k., řezníček, j.: new low-cost 3d scanning techniques for cultural heritage documentation. in proceedings of the isprs xxi congress [cd-rom]. beijing: isprs, 2008, vol. 8, p. 222-225. issn 1682-1750.
[7] pavelka, k., řezníček, j., hanzalová, k., prunarová, l.: non-expensive 3d documentation and modelling of historical objects and archaeological artefacts by using close range photogrammetry. proceedings of the workshop on documentation and conservation of stone deterioration in heritage places 2010 [cd-rom]. amman: cultech for archeology and conservation, 2010.

8. appendix

appendix 1: course of a typical project; the activities of all parties involved are displayed on the right side of the table.

integration of hybrid outdoor and indoor surveying.
a case study in spanish renaissance style towers.

j.i. sánchez1, j.i. san josé2, j.j. fernández2, j. martínez2, j. finat3
1 department of applied physics, ets arquitectura, 47014 valladolid
2 laboratory of architectural photogrammetry, ets arquitectura, 47014 valladolid, university of valladolid, spain, lfa@ega.uva.es
3 dept. of algebra and geometry, ets ingeniería informática, university of valladolid, 47011 spain, jfinat@agt.uva.es

keywords: architectural surveying, range-based methods, renaissance towers, conservation interventions.

abstract: the fusion of different image- and range-based techniques is acknowledged as the best option for three-dimensional surveying of objects displaying a complicated geometry at different scales and/or resolutions. a special case that still poses several challenges involves towers, which represent a compendium of constructive elements and, consequently, a large number of problems related to conservation or maintenance interventions. in this work, we present an extended photogrammetric approach which includes elements of information systems in architecture, with a special regard to structural analysis and some aspects of materials analysis. we illustrate our approach with several examples concerning the hybrid surveying of architectural renaissance towers which were constructed in several agrarian zones of the kingdom of castilla (spain) during the 16th and 17th centuries.

1. introduction

during the 16th century a large number of towers were built in the old kingdom of castilla, combining gothic and renaissance styles in an original way. often, structural indoor elements are constructed following the gothic tradition (mainly for vaults), but walls and outdoor elements are constructed following the early renaissance style. besides the undeniable beauty of these hybrid solutions, this way of constructing poses a lot of important problems concerning the behaviour of structural elements. indeed, the traction efforts of gothic vaults cannot be resolved by means of flying buttresses (arbotantes) and must be absorbed by buttresses which are embedded inside the walls of the tower or are discharged along other structural elements of the church which are located at the feet of the church. this architectural solution would explain the peculiar location of the single renaissance tower with regard to precedent solutions (typically a double tower) of the gothic style. from the middle of the 16th century, the liturgical needs emanating from the council of trento are at the origin of changes involving the internal spatial organization of churches, which have been solved with new techniques and styles. besides the construction of vestries at the head of churches, baptisteries were built on their opposite side. because of this, one can very often find a direct and open communication between the church and the lower part of the towers, where the baptismal font is located. the choir acquires a higher relevance than in the preceding centuries, as the church size increases. it is located at the feet of the temple and must accommodate the organ and the singers in charge of giving more solemnity to the new liturgies. very often these activities are located inside the towers, more specifically on the first floor of the towers. all these new functionalities introduce additional constraints on the design and readaptation of internal spaces, which modify the interplay between spaces and pose challenges involving the behaviour of different structural components.
besides a discussion about the style and smart constructive solutions, this approach poses intrincate problems for restoration or rehabilitation interventions. indeed, constructive solutions are only partially known and some elements are not physically accessible, such as the space located between the visible vault and the floor. the presence of fissures or cracks along some walls of these towers displays structural elements in visible or hidden elements. in addition, the composition of materials inside the walls supporting compression efforts is sometimes non-homogenous, or some old interventions have been performed by using non-appropriate elements. reinforcement of structural elements and replacement of components must be carefully planned to avoid a global crash of the whole fabric. solutions to be performed require a very careful multiresolution surveying and depend in a strong way of the implicit knowledge of experts in architectural restoration. an extensive recent treatment with a large diversity of techniques is included in [1] ________________________________________________________________________________ geoinformatics ctu fce 2011 133 figure 1: 1) tower of santiago‟s church, castrillo de murcia, spain. 2) md4-100 capturing aerial images. three-dimensional surveying is acknowledged as the most appropriate solution for visualizing interventions: it includes all planar representations which can be obtained from a sectional analysis and provides a support for additional reports. the lack of verticality between structural elements (columns, e.g.) can be obtained by traditional photogrametric methods which provide usual cad representations as an output. the evaluation of effects related to vertical elements has been emphasized by several authors with applications to religious and civil buildings (see [2] and references therein), with a methodology very similar to ours. the range information provided by laser scan devices provides discrete models with a density which can be selected by user in capture process and sampled in different ways, including superposition of textures obtained from high resolution views, depending on the facilities of the software application. in our case, from the identification of a direction the uvacad software platform allows to obtain a collection of plane sections with a distance between them selected by user; the superposition of a sequence of parallel “slices”. to achieve these goals, we have cut out a collection of horizontal and vertical slices on the 3d model which has been performed with uvacad (see fig. 2). these slices can be orthogonally projected on the dominant planes of the involved façades. projected slices of the point‟s clouds allow to draw profiles for each level of the tower. the resulting documentation is exported to autocad where we represent the geometry and shape of front views for each façade. the availability of metric functionalities on autocad allows comparing and evaluating small differences between parallel sections or profiles corresponding to each involved element. the space limitations for optimal localization of devices (laser scans, typically), the overlapping of structural elements (self-occlusive stairs, e.g.) or the existence of hidden components (inside the walls or the slabs, e.g.) in a very reduced space make more difficult the surveying and the interpretation of the tower. 
for solving these problems, we have developed several software solutions which provide a support for their integration in a global model and provide a support for interpreting it, before assessing the interventions to be performed. these software tools concern to a projection of image-based information on a common model, a semi-automatic recognition of elements (walls and simple vaults) for evaluating structural defects, an integration of non-intrusive techniques in 3d models for materials which can be displayed to several levels of detail, a modular representation of components as support for a future taxonomy and a visualization tool able of integrating information linked to structural components and materials characteristics. the rest of the paper is organized as follows. section 2 is focused to give a methodological overview able of integrating documentation, information and management systems as successive stages in regard to conservation and rehabilitation tasks and their application to towers. section 3 provides some remarks relative to the used non-destructive techniques which are superimposed to the information system. section 4 is devoted to the main problems from the structural viewpoint which imposes the most meaningful constraints regarding the intervention agenda. section 5 illustrates our approach with several examples arising from renaissance towers in burgos and palencia (spain). section 6 advances some conclusions and sketches the next steps to be done in future development. ________________________________________________________________________________ geoinformatics ctu fce 2011 134 figure 2: point cloud processing by uvacad software. 2. a layered approach: dim systems in layered approaches from geographic information systems (gis) each document provides the support for different kinds of information linked to provide a support for computing and assessing interventions. the same methodology can be applied to any kind of objects which can be classified within three systems: 1) documentation systems for any kind of historic files and recent representation linked to topographic, mapping or volumetric aspects, and their mutual relations in the ambient space which are reference to an accurate geometry. in our case, all data relative to the building is referred to a volumetric representation which allows recovering any kind of information which is managed by simple queries in a kd-tree. the main goal is to provide a support for tracking building temporal evolution, related to the methodology of [4]. 2) information systems in architecture focused towards deconstructive process with an emphasis on technicalconstructive elements involving the structure and materials. its functionalities include semi-automatic or manual insertion of relevant data for each layer referred to raster or vectorial data, basic computations relative to metrics (lengths, areas, volumes) or annotations relative to involved materials in their localizations in a very precise way. the main goal is the development of knowledge of the whole building including structural and materials analysis of the fabric, in their context. there is an increasing number of information systems for cultural heritage applications, including archaeological sites (see [5]). 3) management systems for constructive process, including formal factors involving the architectural linguistic issues (style, original constructive elements), normative frameworks and choice of the best strategy for intervention. 
the connection with information system includes some functionality linked to the representation and management of data which are subject of intervention. in particular, it is necessary to manage a basic statistics relative to performed measurements, a currently manual identification of critical values, software tools for multicriteria analysis and tools for tracking interventions. the main goal is to provide an assistance for agenda planning and tracking conservation or rehabilitation interventions on the building, according to internal constraints (technological aspects) and external ________________________________________________________________________________ geoinformatics ctu fce 2011 135 requirements (arising from the client). it is necessary to develop a good methodology for integrating documentation, information and management systems in regard to rehabilitation and conservation policies. figure 3: volume rendering and building elements of st. ana‟s tower in peñaranda de duero, spain. figure 4: graphic drafting process from the point cloud model. a deconstructive or top-down methodology provides the nexus between the above systems. in this work we shall concentrate on those aspects of information system which are strongly related with structural aspects. as the first step, it is necessary perform a 3d surveying of outdoor and indoor spaces, and match together the resulting models for obtaining the current state of the whole building. furthermore, it is desirable to refer the structural and material characteristics of the fabrics -including the observed pathologiesto the resulting model, in such way that one can activate analysis, diagnoses and possible interventions by activating different layers involving outdoor and indoor ________________________________________________________________________________ geoinformatics ctu fce 2011 136 representations. nevertheless the profusion of decorative elements, the exterior surveying is performed by means of a standard combination of imageand range-based modelling, including eventually some information arising from aerial photography. interior environments display a better accessibility, but on the opposite side, indoor typologies are not homogeneous and they display several architectural characteristics, with different constructive solutions (involving charge walls, vaults, stairs, e.g.) which are combining between them in a complex way. figure 5: structures and decorations of a tower building system. 3. a conceptual framework for non-destructive techniques a conceptual framework must solve semantic aspects concerning to the intervention project. these issues concern to classification criteria, the identification of the most appropriate hierarchy and the choice of instruments to be applied before executing interventions. they concern to the choice of semior non-destructive techniques (sdt or ndt in the successive), and the software tools for their management. sdt are focused towards identifying constructive pathologies an store information from different kinds of waves linked to visual or acoustic information captured with different kinds of (non-contact or contact) devices. nevertheless the importance of acoustic devices for materials composition and extensometers for evaluating efforts, we shall concentrate our attention in imageand range-based devices. 
additionally, this choice is justified in our case by the quasi-homogeneous character of walls supporting compression efforts, and the relatively good state of conservation in the fabrics. visible or non-visible spectrum from such devices is stored in different formats referred to a common 3d representation of the object. in our hierarchical approach, results arising from the application of ndt are stored by the management system which frame the information according to the corresponding ontology. our ontology has several layers, involving metadata and more specific lexicon, thesauri and taxonomies for the knowledge management system (kms). the 0th layer corresponds to dublin core standards for making easier the interoperability with other software applications. the 1st layer is based on a thesauri for cultural heritage which has been developed for solving accessibility issues in the framework of the singular strategic project patrac, but which extends traditional accessibility issues to more general aspects involving interventions in cultural heritage buildings. a description of our kms can be found in [5]. software tools currently in development follows a two-step strategy. 1) quasi-static webgis which includes multi-resolution functionalities such as showed in [6,7]; our additional contribution consists of introducing a scalable vector graphics for the management of geometric and materials entities. this scheme has been applied in the portics of the cathedral of leon. 2) dynamic webgis which is focused on the provision of web services including advanced visualization tools for representing, monitoring, tracking, validating or correcting the performed interventions. some geospatial technologies related with our work have been proposed as standards by the opengis consortium and they are currently in development for geomatics (io/tc 211) or interventions at larger scale. 4. application for surveying structural and materials problems most structural problems can be detected from planimetric information by means of a comparison of sectional representations given by planes which are perpendicular to one of the principal axis of the architectural object. nevertheless their volumetric effects, main structural problems concern to compression and traction problems holding ________________________________________________________________________________ geoinformatics ctu fce 2011 137 along principal planes. traditional decompositions of efforts provides a support for identifying efforts in different components trying to validate models and evaluate solutions before executing the most appropriate interventions. usual rehabilitation solutions are planned and executed following parallel or perpendicular directions to effort direction. this is a natural 3d extension of approaches based in region decomposition which can be distributed according to façade characteristics. figure 6: damage indication and treatment in the facade of san gregorio, valladolid, spain. however, this is not the only meaningful case because torsion effects cannot be represented in a planar way. torsion effects are especially meaningful in spiral stairs which are the most common ones inside towers. 
often, all steps share a vertical support given by a central column which discharges compression effects against the ground along the vertical; however, the other extreme of each step is coupled to lateral walls and generate an additional effort distributed along contact points of each spirals in a cylinder which must be compensated with a reinforcement of structural elements. most cases, visual defects (fissures or cracks, e.g.) arise from structural failures relative to uncompensated efforts. the lack of homogeneity of components, biological degradation (deposition of organic residues, e.g.), or damages due to atmospheric conditions are the main responsible. their accumulation can produce undesirable effects which have to be reviewed in a periodic way. their tracking is easier with an information system which has been applied since more than ten years in different monuments. figure 6 displays a template with different thematic layers for detecting and annotating problems at the façade of san gregorio (valladolid, spain) or the gothic portics of the cathedral of leon (spain). first approach was manually performed and requires a post-processing work, whereas the second one was annotated in a portable laptop. 5. some meaningful examples the analytical process of the buildings showed in this section requires the generation of 3d models from raster data and geometric descriptions specified by plants, front views and cross sections. digital files are constructed in autocad, where a structural ordering linked to geometric primitives can be performed for all elements appearing in the architectural configuration of towers. an essential graphical resource for drawing is given by sections which allow displaying the articulation between interior spaces of the tower, and the relative disposition between elements and constructive systems. furthermore, a section-based representation allows explaining the relation between architectural spaces and the articulation between levels and hollows. for achieving an understanding of such complex architectural components it is necessary to generate multiple sections following different orientations and disposals. the deletion of some parts of internal walls and structures makes possible the understanding and organization of the whole fabric. an axonometric perspective allows understanding the relative disposition between parts (which are partially occluded in real models), drawing stairs and identifying their structural role for communicating different floors. furthermore, this representation supports structural information involving slabs and vaults covering tower spaces, and typical shapes which characterize an architectural style, between others. a related methodology for another kind of materials was developed in [7]. the graphical representation provides is useful not only for interactive visualization but also for ________________________________________________________________________________ geoinformatics ctu fce 2011 138 analytical representation supporting geometric information. then, it is possible to understand formal and constructive elements involving the space use through the tower organization, and to translate them through the drawing. in this way, drawing becomes an element to improve the knowledge of the building as a whole which includes three aspects related to the complex organization of towers: composite laws which establish the articulation between elements which are part of towers. 
knowledge of systems / elements which are grouped in the constructive definition of each building. paraments organization by means of the analysis of details, the fabrics composition and the relative disposal and shape of hollows. in addition, the performed analytical process has allowed to study modifications and alterations along the whole life cycle, thought vestiges and evidences which are currently present in the building. this study makes possible to draft 3d representations which reassemble the description of precedent states, and allow understanding the building evolution through the comparison of vestiges with the current state. moreover, the photogrametric work performed at field work becomes especially useful for the paraments analysis, not only for its evocation capability between observation and specification of punctual aspects, but because it provides the data support for constructive characteristics of materials. last ones include specific features for wall materials, and graphical textures which have been used in constructing or remodelling towers. figure 7: analysis of current and ideal state of the santa ana‟s tower in peñaranda de duero, spain. the analysis of local details involving ashlars and rough stones to solve the tower parameters is dealt with three techniques. 1) a direct reading of the coloured point clouds which have been processed for generating vertical or frontal views in the most appropriate resolution. 2) a rectification of views with asrix, which provides the assembling way of fabrics, as much as their texture and colour constrained due to the lack of coplanarity. 3) a drawing reactivation (homograft) to translate the formal aspects involving the walls fabrics. 6. conclusions surveying of renaissance towers poses a difficult challenge due to the diversity of constructive solutions, and the relatively small environments for capturing data. some of the most important problems concern to hidden parts in documentation phase, and the lack of enough data for a complete structural or materials analysis. it is necessary to work and take decisions under incomplete information, by minimising risks along the intervention. far from being a ________________________________________________________________________________ geoinformatics ctu fce 2011 139 particular case, the typology of renaissance towers poses a benchmark for essaying and validating an integrated methodology. the proposed methodology in this paper makes use of homographies for analysing details and pathologies in walls, combining inputs from different sources such as image and range information. this goal also requires continuous feedback between documentation, information and management systems in a common framework for planning interventions which is the main challenge we are currently addressing. figure ř: photo grinding walls in the tower of the santa marina‟s church in villadiego, spain. 7. acknowledgements this work is partially supported by the micinn (spanish ministry of science and innovation) within the adispa project bia2009-14254-c02-01. 8. references [1] structural analysis of historic constructions (2 vols), 7th international conference on structural analysis of historic constructions, sahc, october 6-8, 2010, shanghai. [2] f.guerra, l.pilot and p.vernier: “the facades of gothic buildings in venice: surveys verifying construction theories”, cipa xx intl symp., (torino, 2005). 
[3] drap et al: “photogrammetry and archaeological knowledge: toward a 3d information system dedicated to medieval archaeology: a case study of shawbak castle in jordan”, 3d areach (2007, switzerland) [4] s. f. el-hakim, j.-a. beraldin, l. gonzo, e. whiting, m. jemtrud, v. valzano: “a hierarchical 3d reconstruction approach for documenting complex heritage sites, xx cipa intl symp,. torino 2005. [5] cadenas. p, garcía-tomillo. j., rodríguez-cano. g and finat. j. software interoperability and friendly interfaces for assessing interventions in cultural heritage domains. 1st international workshop on pervasive web mapping, geoprocessing and services, webmgs 2010, isprs archive, vol xxxviii-4/w13. [6] e.s. malinverni, g. fangi, g. gagliardini: “multiresolution 3d model by laser data”, isprs vol. xxxiv, part 5/w12 [7] fernández-martin, j.j., sanjosé, j.i., martínez, j., finat, j. multirresolution surveying of complex façades: a comparative analysis between digital photogrammetry and 3d laser scanning, cipa xx intl symp torino, 2005 [8] a.anzania, l.binda, a.carpinteri, s.invernizzi, g.lacidogna: “a multilevel approach for the damage assessment of historic masonry towers”, j. of cultural heritage 11 (2010) 45ř–470 ___________________________________________________________________________________________________________ geoinformatics ctu fce 249 new low-cost automated processing of digital photos for documentation and visualisation of the cultural heritage karel pavelka, jan reznicek czech technical university in prague, faculty of civil engineering, laboratory of photogrammetry, thakurova 7, prague 6, 166 29, czech republic tel.+420224354951, e-mail: pavelka@fsv.cvut.cz keywords: culture heritage, photogrammetry, point cloud, 3d modeling, camera calibration abstract: the 3d scanning is nowadays a commonly used and fast technique. a variety of type’s 3d scanners is available, with different precision and aim of using. from normal user ´s point of view, all the instruments are very expensive and need special software for processing the measured data. also transportation of 3d scanners – poses a problem, because duty or special taxes for transport out of the eu have to be paid and there is a risk of damage to dismantling of these very expensive instruments and calibration will be needed. for this reason, a simple and automated technique using close range photogrammetry documentation is very important. this paper describes our experience with the software solution for automatic image correlation techniques and their utilization in close range photogrammetry and historical objects documentation. non-photogrammetrical approach, which often gives very good outputs, is described the last part of this contribution. an image correlation proceeds well only on appropriate parts of documented objects and depends on the number of images, their overlapping and configuration, radiometrical quality of photos, and surface texture. 1. introduction documentation and visualization of historic monuments has long been one of the major components of heritage conservation. traditionally, it was used to be a domain of geodesy and photogrammetry in the structuralhistorical research. at the turn of the century, a 3d scanning method was added. the later technology quickly became one of the major methods for documenting complex shapes. generally, 3d scanning includes a large variety of techniques for creating data that are represented by point clouds. 
the best known method is laser scanning using ranging scanners which can collect data automatically with mm precision (object points in real 3d coordinates) from a large area. most of the present methods have several important characteristics: they are slow, difficult to operate, expensive, or all together. such a system may not give accurate and complete results from the user´s point of view. there are the requirements to be satisfied a low cost system, simplicity, easy transportation and, if possible, – fully automatic data processing. of course, this must provide basic common functions for data usage. in this regard, the photographic systems are suitable. 2. laser scanning 2.1 requirements and measurements laser scanning was used to measure non-selected data. it means we can only influence the density of measurements in a regular grid. the main aim of laser scanning is to create an accurate surface 3d model. generally, laser scanners are very expensive and sensitive instruments –thus there can be problems with transportation and calibration at distant sites. it is necessary to use very sophisticated and expensive software for processing the measurements. the measured point clouds are processed by meshing functions to a mesh of irregular triangles. the current laser scanners are equipped with calibrated digital cameras for texturing the laser scanning model by using of photogrammetrical images. the processing of measurements results in a rendered virtual model of the documented object. for object modeling hundreds of millions 3d points are typically used. mailto:pavelka@fsv.cvut.cz ___________________________________________________________________________________________________________ geoinformatics ctu fce 250 figure 1: complex documentation of charles bridge vault in prague obtained by laser scanner callidus figure 2: laser scanning system leica hds3000: scanning of baroque statue ___________________________________________________________________________________________________________ geoinformatics ctu fce 251 2.2 current photogrammetric systems photogrammetry in the classical approach offers selected measured data. stereo-photogrammetry or intersection photogrammetry is used. important object points can be defined in a model by using mouse-clicking. precise stereophotogrammetric systems require special hardware equipment and software, and generally, they are not intended for unprofessional users. in aerial stereo-photogrammetry, a variety of automatic processes is available: automatic target matching for marked control points or fiducial finding, automatic tie-points finding for automatic aero triangulation (aat), or using image correlation techniques for deriving a digital surface model (dsm) from stereo-pairs with known external orientation. all these procedures are part of special stereo-photogrammetric software or software for aat and they have been in use since the nineties of 20th century, initially on unix-workstations and later on today‟s common improved personal computers. intersection photogrammetry was revived in the eighties, initially as a method using analogue photogrammetric images taken by a hand held calibrated film-camera, tablet, and software. later the software has been adapted for digital non-professional cameras; it has always been necessary to calibrate the camera to obtain appropriate results. certainly, the most popular software in this category is photomodeler as a low-cost solution for close range intersection photogrammetry. 
at first, as low-cost software it had only limited possibilities for data processing of images. in the last years, the processing options have been significantly improved by increasing the accessibility of quality computer technology and by related software development as well as by significant by increasing the digital camera quality and resolution. interesting results based on using image correlation and using high quality photos taken by calibrated cameras were expected in the last two years. 3. new photogrammetric process in the laboratory of photogrammetry of the czech technical university (ctu) in prague, we have developed our own experimental system based only on a common digital camera and image correlation software. the optical correlation scanner (oks) consists of one calibrated camera, photo-base with moving camera-holder, tripod, and software written in matlab language. oks was designed to be universal; by changing the base length, it is possible to measure both short-distance and long-distance objects (the base is variable up to 1 meter). in order to obtain good correlation, the images are taken with a very short step (usually 5 or 10 cm). the basic idea of this approach is such that each point should be matched on more than two images. by using two images only, there are no verification results. the third (or next) image point correspondence is necessary for validating the correctness of point match. the total precision and reliability of point match is increased by more image point correspondences. the output from this process is a textured point of cloud, which can be meshed and processed by using the software usually used for laser scanning. figure 3: image sequence for improving image correlation process ___________________________________________________________________________________________________________ geoinformatics ctu fce 252 figure 4: optical correlation system (oks) figure 5: final 3d model of one part of the relief with texture (baroque-age relief near velenice village in the czech republic) ___________________________________________________________________________________________________________ geoinformatics ctu fce 253 3.1 photomodeler scanner in the photomodeler scanner software, the photogrammetric method of multi-image intersection and bundle adjustment has been chosen. the intersection method demands a set of images captured around the object (or partially around it) from different positions for the measured points to be visible on at least two images captured from different locations. for external orientation and transformation to a local or national coordination reference system, the ground control points (gcp) were needed, or it is possible to enter a precisely defined distance only for scaling, which is often sufficient for small objects documentation. on the other hand, the image correlation works well on images with parallel axes of frame. in such case it was necessary to solve the problem of integration of images taken with parallel axes, which are unsuitable for intersection photogrammetry. the first functioning version was launched at the end of 2009, but it was still far from full use in practise. computing of all suitable image combinations took long hours on a common new computer. however, the process of referencing all images was still manual. the outputs were often heavily contaminated by noise. 
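the noise from pair-only computation described above is exactly what a multi-image check is meant to suppress; in the oks approach each point must be confirmed in a third (or further) image before it is accepted. the following is only a hedged sketch of such a reprojection-error test, not the authors' implementation: the projection matrices, observations and pixel threshold are assumed inputs.

```python
import numpy as np

def project(P, X_h):
    """Project a homogeneous 3D point X_h (4,) with a 3x4 camera matrix P."""
    x = P @ X_h
    return x[:2] / x[2]

def confirmed_in_all(X, cameras, observations, max_px_error=1.0):
    """Accept a triangulated point only if its reprojection error stays small
    in every image where it was observed (the multi-image check used to
    suppress wrong matches that would survive a two-image intersection)."""
    X_h = np.append(np.asarray(X, dtype=float), 1.0)
    for P, uv in zip(cameras, observations):
        if np.linalg.norm(project(P, X_h) - np.asarray(uv, dtype=float)) > max_px_error:
            return False          # inconsistent in at least one view: reject
    return True

# usage sketch: keep only points observed and confirmed in three or more views
# accepted = [X for X, cams, obs in candidate_points
#             if len(cams) >= 3 and confirmed_in_all(X, cams, obs)]
```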
a new 2010 version is much better: the computing time is significantly reduced and the image orientation process (referencing) involves automatic procedures. it must be said that this photogrammetric system is based on the physical and mathematical basis of photography, central projection, and use of exact camera calibration. 3.2 case project sarcophagus as a small case project, a set of 15 images taken by calibrated canon 20d with 8mpix was used. for simple documentation the decorated sarcophagus beside the hercules temple on the jabal al-qal'a, also called amman citadel was selected. all images was taken from hand within two minutes only. after manual referencing of all images, only those with approximately parallel axes were processed to a point of cloud. within only a few minutes about 500 thousand object points from one decorated site were captured. the created point cloud was rendered with quality textures from images. all the images cannot be processed by automated image referencing – the overlap between images was not sufficient for creating complex model; thus only 8 images from the decorated part were processed. the photomodeler software computes cloud of points always from a pair of images only without control or verification and with many errors (noise). the software options are limited and it is clear that it will be enhanced. figure 6: sarcophagus ___________________________________________________________________________________________________________ geoinformatics ctu fce 254 3.3 camera calibration for image acquisition, a digital camera canon 20d and canon 10-22 mm lens were used. the use of an ultra-wide lens required a very precise calibration, for which several software solutions were used (photomodeler v6 and self-made software in matlab language). the parameters of focal length, principal point, radial and decentric distortions and chromatic aberration were calibrated in dependence on used lenses. the planar calibration field for sw photomodeler, extended with four points in space, was captured from 16 different positions. the images were firstly converted to the original bayer scheme with sw dcraw and then separated into three independent greyscale images (rgb) with the quarter of the original resolution. then the calibration process was made in sw photomodeler independently for each channel and the resulting parameters were saved to the common input file for next computing in sw matlab. next, the parameters were balanced in order to have one common focal length for all channels and also in order to take advantage of the whole image format after distortion repair (similar to idealization process in sw photomodeler – however, this software does not compute chromatic aberration). finally, the calibration protocols in matlab were generated and used for image optimization (idealization – creation of a new photo set without image distortion) of the photogrammetric images. figure 7: output from photomodeler software 4. non-photogrammetrical process 4.1 present state of the art the idea to derive 3d information from planar images is old and was used successfully in classical photogrammetry. there is not only a photogrammetric view of the matter; the image can be regarded as a signal. in this case the processing is possible without knowing the details of the camera used. automatic correspondences finding between images is used (sift scale invariant feature transform) and this process is used for panorama stitching or for automatic reconstruction of 3d scenes. 
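since sift correspondences are the entry point of the non-photogrammetric pipeline discussed next, a small sketch of sift matching between two photos may help; it uses opencv as a stand-in (an assumption — the authors' workflow relies on bundler's own sift stage), and the file names are placeholders.

```python
import cv2

# two overlapping photographs of the documented object (placeholder paths)
img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# detect SIFT key points and compute their descriptors
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# brute-force matching with Lowe's ratio test to keep only unambiguous matches
matcher = cv2.BFMatcher()
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]

print(f"{len(kp1)} / {len(kp2)} key points, {len(good)} tentative correspondences")
```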
the newly developed methods appeared first (around the year 2000) in the theory of image processing and later in low-cost photogrammetry. the new approach is based mainly on automatic referencing of images to a model and automatic extraction of geometrical information. the use of only a set of common images, processed by image correlation techniques into a point cloud, appeared after the year 2000. nowadays, it is possible to find free software solutions or web services like the photosynth software (from microsoft), which creates panorama images or a virtual model using free client software. after processing of the images, a virtual model is published on the web site. another software system, the arc 3d webservice (katholieke universiteit leuven), also makes a 3d model from an image sequence taken by an uncalibrated digital camera. this system consists of two modules: the image uploader for uploading images to the server and the modelviewer for visualisation of the processed object and its export to another format (for example vrml). the use of these services is very easy, but they are usually not intended for professional processing; above all, they indicate the current possibilities in automatic processing of photos.

4.2 technical process for automatic processing

in the laboratory of photogrammetry of the ctu, a special work-flow or technical process for easily applicable and simple documentation of historical objects has been developed. the aim of this work is to provide simple and cost effective documentation and presentation of monuments. after the input of images (they can be taken by various unknown cameras), the bundler software is used. using the sift algorithm, the key points are identified (about 2000), and for each image it is determined whether its major points appear in other images (matching). automatic finding of correspondences between image pairs (sift) is one of the ways to find the key points. the set of points arising from the key points in the images is referred to as a thin (sparse) point cloud. during the process of creating thin point clouds by sfm algorithms (structure from motion for unordered image collections), the internal and external camera parameters are computed. pmvs (patch-based multi-view stereo) software searches for the same places on different photos and adds points to the thin point cloud. a dense point cloud is created, on which the fine structure of the object is visible. the point cloud arising from this procedure was further modified in the program meshlab, which is an open source, portable and extensible system for processing and editing unstructured meshes. meshlab supports the processing of unstructured models arising, for example, from a 3d scan. it provides a set of tools for editing, cleaning, healing, inspecting and rendering this type of data. for further research it is necessary to reconstruct a surface from the points automatically (a sketch of such a meshing step is given below). another option is to load these points into a graphic studio (cad or other special software) and trace out all the elements. complicated programs are only for experts and they are usually expensive. both options require additional software and knowledge of how to work with these programs, which strongly limits further use. in order to address professionals (archaeologists and experts in monument care), it is necessary to create a tool that can load the data (points) and handle them easily.
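the automatic surface reconstruction step mentioned above can be illustrated with the open3d library used as a stand-in for meshlab's poisson filter (an assumption; the authors work interactively in meshlab). the input file and parameter values are placeholders.

```python
import numpy as np
import open3d as o3d

# dense point cloud produced by the multi-view stereo step (placeholder path)
pcd = o3d.io.read_point_cloud("dense_points.ply")

# Poisson reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=20)

# screened Poisson surface reconstruction; depth controls mesh resolution
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# drop triangles supported by very few points, then save the result
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.02))
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```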
we have created easy to use and intuitive software and this is the final part of the process described above. 5. results both photogrammetrically oriented and non-photogrammetrically oriented techniques for automated 3d model separation were tested. the new 2010 version of photomodeler catches with the losses in the automation trend in the recent years. its big disadvantage, however, is that it enables to create a cloud of points always from two images without any control. it results in a considerable amount of noise and model distortion. this problem is solved by system oks developed at the ctu in prague; it computes all points from all appropriate images, but it needs special equipment. the non-photogrammetrical approach computes the point cloud also from all appropriate images, and it does not require any special equipment and camera parameters. this system needs a large number of images with a big overlap. in our testing it has given the best and fully automated results. on the other hand, it is not just one complete system; it consists of several independent steps. 6. discussion using the non-metric digital camera for photogrammetry has revealed many problems in data processing. they can be seen especially in precise camera calibration, large number of images, and data processing. using photogrammetrical images for texturing the final 3d model is very important for visualization, but there are no special functions for radiometrical correction of photos which are common, e.g. for creating an orthophoto from aerial images. scanning large and complicated objects introduces problems with inaccessible parts and details lost in the data noise. better detailed resolution can be reached by using better quality images, better configuration of images taken at good illumination (very important), and, of course, with appropriate software, which, nowadays, is quickly progressing; with modern computers the technique and computation speed will soon be out-of-date. automatic photogrammetric and non-photogrammetric based techniques can be used both for simple documentation and creation of a 3d model. the advantage is in the whole procedure is based only on images taken by common camera (from the photogrammetric point of view by a calibrated camera). this seems to be very simple, but many issues are still to be addressed. thus only elementary objects with a good texture can be easily processed to obtain the 3d model. ___________________________________________________________________________________________________________ geoinformatics ctu fce 256 figure 8: scheme of non-photogrammetrical system for 3d modelling 5. conclusion the main result of the above mentioned process is an accurate triangular surface model. the high resolution views and 3d models were generated from digital photos only. the process based on classical intersection photogrammetry (represented here by photomodeler) has been recently supplemented with functions for fully automated image referencing and automated point clouds separation. in the last years, another non-photogrammetric approach has been applied; an automatic image processing method has been created, making use of signal processing, computer vision, and image processing. the main goal of this paper is to describe and test new documentation possibilities by using automated image processing. 
the use of new automated technologies has shown that they are useful in the documentation of small historical objects or in archaeology and that they can also be used by non-professionals; however, there are still many problems due to noise, model texturing, and data joining.

6. acknowledgement

this project is sponsored by the czech ministry of education, research scheme msm6840770040 (ctu nr. 3407401).

[figure 8 scheme: input of images (+ exif data) → bundler → point cloud + camera parameters → pmvs → dense point cloud → meshlab (meshing, geometry) → output 1: software for transformation to vrml; output 2: software for purging and coloured geometry; output 3: software for visualisation and data usage]

figure 9: point cloud created from images

figure 10: meshed point cloud rendered from original images

7. references

[1] lowe, d.g.: object recognition from local scale-invariant features. international conference on computer vision, corfu, greece, pp. 1150-1157, 1999.
[2] lowe, d.g.: "distinctive image features from scale-invariant keypoints", international journal of computer vision, 60, 2 (2004), pp. 91-110. http://www.cs.ubc.ca/~lowe/keypoints/
[3] hartley, r., zisserman, a.: "multiple view geometry in computer vision, 2nd edition", cambridge university press, 2004.
[4] brown, m., lowe, d.g.: "unsupervised 3d object recognition and reconstruction in unordered datasets". http://research.microsoft.com/~brown/papers/3dim05.pdf
[5] matas, j., chum, o., urban, m., pajdla, t.: "robust wide baseline stereo from maximally stable extremal regions", british machine vision conference, 2002.
[6] mach, l.: sift – scale invariant feature transform (automatic finding of correspondences between a pair of images) [in czech], http://mach.matfyz.cz/sift
[7] pavelka, k., reznicek, j.: culture heritage preservation with optical correlation scanner, 22nd cipa symposium, october 11-15, 2009, kyoto, japan.
[8] reznicek, j., pavelka, k., 2008: new low-cost 3d scanning techniques for cultural heritage documentation. the international archives of the photogrammetry, remote sensing and spatial information sciences, vol. xxxvii, part b5, beijing.
[9] vergauwen, m.: arc 3d webservice [online], 2009.
[10] snavely, n., seitz, s.m., szeliski, r.: photo tourism: exploring image collections in 3d. acm transactions on graphics (proceedings of siggraph 2006), 2006.
[11] snavely, n., seitz, s.m., szeliski, r.: modeling the world from internet photo collections. international journal of computer vision, 2007.
[12] dellaert, f., seitz, s., thorpe, c., thrun, s.: structure from motion without correspondence. ieee computer society conference on computer vision and pattern recognition, 2000.
[13] berg, m. et al.: computational geometry, algorithms and applications. 2nd edition. berlin: springer, 2000. 367 p. isbn 3-540-65620-0.
[14] pavelka, k., řezníček, j., hanzalová, k., prunarová, l.: non-expensive 3d documentation and modelling of historical objects and archaeological artefacts by using close range photogrammetry. workshop on documentation and conservation of stone deterioration in heritage places 2010 [cd-rom]. amman: cultech for archeology and conservation, 2010.
[15] koska, b.: using unusual technologies combination for madonna statue replication. proceedings of the 23rd cipa symposium [cd-rom].
praha: čvut, fakulta stavební, katedra mapování a kartografie, 2011, p. 5ř-66. isbn 978-80-01-04885-6. [16] pavelka, k. tezníček, j. koska, b.: complex documentation of the bronze equestrian statue of jan zizka by using photogrammetry and laser scanning. workshop on documentation and conservation of stone deterioration in heritage places 2010 [cd-rom]. amman: cultech for archeology and conservation, 2010 [17] kuemen, t. koska, b. pospíšil, j.: verification of laser scanning systems quality. xxiii-rd international fig congress shaping the change [cd-rom]. mnichov: fig, 2006, isbn 87-90907-52-3. [18] koska, b. kuemen, t. štroner, m. pospíšil, j. kašpar, m.: development of rotation scanner, testing of laser scanners. ingeo 2004 [cd-rom]. bratislava: slovak university of technology, faculty of civil engineering, 2004, isbn 87-90907-34-5. [19] svatušková,j.: possibilities of new methods for documentation and presentation of historic objects, phd thesis, ctu prague, faculty of civil engineering, 2011. http://mach.matfyz.cz/sift ________________________________________________________________________________ geoinformatics ctu fce 2011 118 metadata and tools for integration and preservation of cultural heritage 3d information achille felicetti1, matteo lorenzini2 1pin, università degli studi di firenze piazza ciardi 25, 59100 prato, italy achille.felicetti@pin.unifi.it 2iccu, ministero per i beni e le attività culturali viale castro pretorio 105, 00185 roma, italy matteo.lorenzini@beniculturali.it keywords 3d, digital repositories, metadata, open source, ontologies, cidoc-crm abstract: in this paper we investigate many of the various storage, portability and interoperability issues arising among archaeologists and cultural heritage people when dealing with 3d technologies. on the one side, the available digital repositories look often unable to guarantee affordable features in the management of 3d models and their metadata; on the other side the nature of most of the available data format for 3d encoding seem to be not satisfactory for the necessary portability required nowadays by 3d information across different systems. we propose a set of possible solutions to show how integration can be achieved through the use of well known and wide accepted standards for data encoding and data storage. using a set of 3d models acquired during various archaeological campaigns and a number of open source tools, we have implemented a straightforward encoding process to generate meaningful semantic data and metadata. we will also present the interoperability process carried out to integrate the encoded 3d models and the geographic features produced by the archaeologists. finally we will report the preliminary (rather encouraging) development of a semantic enabled and persistent digital repository, where 3d models (but also any kind of digital data and metadata) can easily be stored, retrieved and shared with the content of other digital archives. 1. introduction 1.1 cultural heritage and 3d technologies 3d modeling has become widespread in many areas of archaeological research and nowadays comprises a wide range of applications to cover every aspect of the archaeological work (e.g. landscape analysis, excavation area documentation, creation of digital representations of monuments and artifacts). 
after some initial skepticism, 3d seems to have finally seduced the cultural heritage experts and, especially in archaeology, 3d technologies are increasingly used not only for the typical operations of reconstruction and presentation to the public, but also for restoration and preservation purposes. the use of information technology to capture or represent the data studied by archaeologists, art historians and architects falls now under the name of virtual heritage, a brand new and fascinating branch of knowledge [1]. a lot of work has been carried out during the last decade and a huge amount of digital information has been produced. but at the same time, this rapid and uncontrolled growth has given rise to brand new issues, which must now be faced. one of the most common concerns, not only in the cultural heritage field, is interoperability. portability and integration of 3d information across different systems and over the web is very often impeded by the 3d acquisition and processing tools, mainly because of the proprietary (i.e. “closed”) data formats used to encode information, but also for the non-standard way in which these tools capture the provenance information of each digital model. other relevant obstacles regard the long and often too diversified pipeline necessary for the creation of the final 3d model: the whole encoding process is usually split into a series of complex operations (from digital acquisition to the final creation of the model, through the various processing steps) which are very often performed by different people at different locations and times. interoperability requires a solution for these kinds of problems and all our efforts are conveyed towards this purpose. ________________________________________________________________________________ geoinformatics ctu fce 2011 119 2. interoperability, open source and open standards 2.1 3d content and metadata formats the history of standardization of 3d data formats is longer and perhaps more complex than that of any other digital resource. one of the reasons behind this is that the 3d formats have been (and are even today) strictly bound to the various tools and software used in turn to acquire and process 3d information. most of these tools and software still provide proprietary file formats to encode the resulting 3d objects. as a result this practice makes 3d content very difficult to share and exchange. proprietary data formats have always created barriers to the integration among 3d data and the other 2d digital information and have for a long time impeded the creation of a suitable open and standard format able to encompass all the interoperability issues. popular 3d formats like dwg and 3dshape are still considered de facto standards, although they frustrate any possibility of efficient data sharing and exchange due to their “closed” nature. one attempt to create an open format was performed by adobe, who invented a way to embed 3d content into pdf documents. 3d pdf is now an iso standard (iso 32000-1:2008) enabling users to create their own 3d pdf library and related software [2]. but its range of use is very narrow due to its absolute lack of flexibility an d inadequacy in facing the huge demand of interoperability required by modern information technology when dealing with 3d. even if many attempts have been made to implement a conversion framework between (at least) the most popular formats, many issues of data loss still remain while converting from one 3d format to another. 
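one practical way around closed formats, taken up in the next section, is to re-encode geometry in an open xml-based scene format such as x3d. the toy exporter below, written with the python standard library, only illustrates the idea of such an open encoding; it is not a complete or validated x3d writer.

```python
import xml.etree.ElementTree as ET

def write_minimal_x3d(path, points, faces):
    """Write vertices and faces as an X3D IndexedFaceSet (toy exporter).
    points: list of (x, y, z) tuples; faces: lists of vertex indices."""
    x3d = ET.Element("X3D", profile="Interchange", version="3.3")
    shape = ET.SubElement(ET.SubElement(x3d, "Scene"), "Shape")
    # X3D closes every face index list with -1
    coord_index = " -1 ".join(" ".join(str(i) for i in f) for f in faces) + " -1"
    ifs = ET.SubElement(shape, "IndexedFaceSet", coordIndex=coord_index)
    ET.SubElement(ifs, "Coordinate",
                  point=" ".join(f"{x} {y} {z}" for x, y, z in points))
    ET.ElementTree(x3d).write(path, encoding="utf-8", xml_declaration=True)

# a single triangle, just to show the encoding
write_minimal_x3d("triangle.x3d",
                  points=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                  faces=[(0, 1, 2)])
```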
as mentioned above, a serious matter of non-integration affects the metadata generated by the 3d model creation pipeline (provenance data). in a very common scenario, the format of these metadata remains strictly related to the acquisition/processing tools. this information, which is usually very detailed and comprises in some cases of hundreds of data fields, is often encoded following ad hoc and non-standard schemas, making it impossible to compare it to and integrate with other similar information coming from different sources. 2.2 open formats and open standards for 3d encoding most of the possible solutions to all of the format problems mentioned above rely on the adoption of open standards and on their ability to guarantee the necessary portability and cross-compatibility of digital information. open standards can be used throughout the whole pipeline, from the acquisition/creation of the 3d model and the related metadata, to the processing and presentation operations, until its storage in digital repositories. many formats already exist and provide a good degree of standardization: the collada, for instance, was created to represent 3d models with a standard syntax [3] and many important applications, like google sketchup, natively support it; taxonomies like citygml [4] can describe specific elements of a 3d scene and their mutual relationships, and make them interact with the spatial elements of a geographic context; the new html5 specification will hopefully simplify the visualization of x3dencoded models on the web [5]. 2.3 metadata formats standardization is not only a matter of data formats, but it is something that can also be achieved on other levels, for example by providing 3d information with valuable sets of metadata. many improvements have been achieved in this field, particularly after the publication of various guidelines and recommendations addressing the importance of having good quality and standardized metadata for documenting 3d content, especially in the cultural heritage field. the contribution of the london charter for computer-based visualization of cultural heritage has been one of the most essential to overcome this issue [6]. today many ontologies and schemas are available which produce standard sets of metadata. most provide rich collections of classes and properties to capture every degree of granularity required for the description of 3d models and for their enrichment with annotations and other similar techniques. in particular cidoccrm and its derivatives are among the preferred ontologies for the representation of cultural heritage descriptive metadata [7], even if the difficulties in using these schemas impose on the developers a high degree of automation and the creation of extremely user-friendly interfaces. the world of digital libraries has always made use of various metadata schemas to describe the archived objects and if the dublin core model has always been preferred for its simplicity in encoding basic description of objects, nowadays the mets is becoming more increasingly used for the so called structural metadata, which describes the logical or physical relationships between the various parts of a compound object [8]. one of the biggest requirements in 3d documentation is the so called digital provenance, which puts the creation pipelines in relation with the digital objects, just as the physical provenance puts physical places in relation with physical artifacts. 
digital provenance is gaining increasing importance in cultural heritage research and practice since it deals with the uninterrupted chain linking the original to the processed outcome. the detailed documentation of this chain provides the necessary transparency and, from the scientific point of view, the repeatability and verifiability of the whole process. however this can occur only when the documentation is properly acquired. the provenance recording process should also be dense enough to document every step of a digital object‟s life in order to build a complete fingerprint for the preservation of the necessary referential integrity of the metadata. many new standards are also appearing on the stage for the encoding of provenance information. the crmdig seems to be one of ________________________________________________________________________________ geoinformatics ctu fce 2011 120 the most promising among them [9]. crmdig is an extension of the cidoc-crm ontology and was developed for the documentation purposes of the 3d-coform project [10]. it provides an event-centric model to capture the technical metadata typical of the data acquisition. crmdig provides a superclass “data acquisition event” and many subclasses to describe the various related sub-events and the entities involved in the process, e.g. the information concerning the acquisition tools (calibration, data formats), the actors participating in the acquisition event and the places where the acquisition event was performed. the model is intended to provide a flexible infrastructure to build provenance information in a very precise way. 3. digital containers for 3d information 3.1 the quest for a 3d digital repository along with the issues of data formats and metadata creation, there is another challenge to face in order to achieve real interoperability, a flexible exchange and a safe preservation of the 3d digital content. it concerns the nature of the available digital repositories, their inadequacy in dealing with 3d models and in guaranteeing affordable storage features, reliable and meaningful data retrieval and long-term preservation of digital artifacts and metadata. good results have recently been achieved by the open source community on the “repository” side: if the storage model based on the mpeg-21 “containers” for data and metadata recommended by the epoch project was lacking in flexibility, recent developments of new semantic-oriented paradigms make the storage/retrieval operations much more effective. we have surveyed many existing content and media management repositories to find a flexible and adaptive technology. at the end of our investigation we focused our research on the most interesting ones as of today: the 3dcoform repository infrastructure and the digital repository provided by fedora commons. the fedora digital repository provides a flexible digital content repository, which can be adapted to a wide variety of scenarios and can store any kind of digital content including images, videos, datasets and so on, together with a complex network of relationships linking the digital objects to each other. even if fedora can be used as a standalone repository service, its power is in its flexibility which makes it easy to integrate into an application or system that provides additional functions to satisfy particular user needs (e.g. a robust triple store or a fast and reliable query/retrieval framework) [11] . 
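to make the event-centric provenance model of section 2.3 more concrete, here is a minimal sketch built with the rdflib library. the class and property names are illustrative approximations of cidoc-crm / crmdig terms and have not been checked against the published schemas; all uris are invented for the example.

```python
from rdflib import Graph, Namespace, Literal

# namespace URIs are stand-ins for this sketch, not authoritative
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
DIG = Namespace("http://www.ics.forth.gr/isl/CRMdig/")
EX = Namespace("http://example.org/heritage/")

g = Graph()
g.bind("crm", CRM)
g.bind("crmdig", DIG)

scan = EX["acquisition/0001"]       # the data acquisition event
model = EX["model/monument_3d"]     # the digital object it produced

# event-centric description: what was produced, by whom, when, on which device
g.add((scan, DIG["L11_had_output"], model))                      # illustrative name
g.add((scan, CRM["P14_carried_out_by"], EX["person/surveyor_01"]))
g.add((scan, CRM["P4_has_time-span"], Literal("2010-05-12")))
g.add((scan, DIG["L12_happened_on_device"], EX["device/laser_scanner_a"]))

print(g.serialize(format="turtle"))
```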
the 3d-coform distributed object repository is a digital repository developed to provide cultural heritage experts and practitioners with a working platform to access, use, share and modify digital content. it comes as an integrated repository to ingest, store and manage complex digital objects together with the related metadata, to enable efficient access and to export the information for reuse in other contexts. the 3d-coform repository also provides features to manage the digital object provenance information, descriptions and semantic classification of the modeled objects, including their physical location, their history, sources and expert annotations about modeling and related historical data [12]. we have chosen to base the core of our experimental system on the fedora architectural model instead of using the 3d-coform infrastructure, mainly because the latter, notwithstanding the good transparency kept towards the external services interacting with the core, implements an internal separation and decentralization between the object archive and the metadata archive in the core itself [13]. this architectural approach very often causes a lack of performances at the level of the core itself, i.e. in the very place where high performances are constantly required. the modular approach provided by fedora commons, with a very rigid but perfectly integrated “dual” core of data and metadata management, assisted by distributed services, offers a wide range of possibilities and can guarantee fast and affordable interaction between the digital objects and their metadata. we will make the connected services, developed on top of the fedora core, compatible with the 3d-coform repository infrastructure as soon as its architecture will become stable and the final version of the necessary 3dc communication apis will be released. for this purpose we are already implementing the same data models used by the 3d-coform project, in particular the cidoc-crm ontology for structural and descriptive metadata and the crmdig model (among the others) for the encoding of provenance metadata, in our system. 3.2 the fedora-based 3d repository the core of our repository, based on version 3.4 of the fedora digital archive, provides: a digital object repository to ingest, store, aggregate manage and extract digital objects coming from different institutions in different formats (images, videos, documents and other relevant files). a semantic resource index that provides the infrastructure for indexing the complex network of information regarding relationships between objects and components. we have extended this module to support the storage of the rdf representation of all the metadata used within the system (including cidoc-crm, mets and crmdig). ________________________________________________________________________________ geoinformatics ctu fce 2011 121 on top of the core we have implemented a set of services, both by extending some already provided by fedora itself and by developing ex novo the services required and not available in the fedora framework, using open source technology (figure 1). in particular, one of the main issues we‟ve had to face concerned the search/retrieval service: fedora actually provides only a basic query/retrieval mechanism based on textual search to look for digital objects and a very trivial bunch of functions to query metadata. for this reason we bound a solr framework to the fedor a core. 
solr is a scalable, totally open source and extremely powerful enterprise search platform which provides, among other features, a dynamic geospatial search, a strong integration framework and one of the most advanced faceted searches available today [14]. for this and many other reasons, solr has been chosen for the construction of the query framework. the other available services provided by the system are: a set of ingestion tools and technologies, for the preparation of sips, the standard packages for the ingestion of digital objects and metadata and for the appropriate archiving of all the digital and semantic information available. these tools also provide user interfaces to reduce user interaction and to guide the user through all the different phases of the metadata and uri creation to guarantee full internal compliance with the digital archive. a conversion framework, to create descriptions of the 3d models in collada, x3d, citygml (and if required, other open formats) and to make them available for online visualization and download. an enrichment mechanism, to combine existing metadata with new information regarding the same objects (e.g. geographic information, data coming from different thesauri, annotations and so on). the enrichment framework is also able to create “aggregated objects” by adding semantic and geographic information directly into the collada and x3d code. a content versioning mechanism, to track when a change is made on a certain object and by whom. every time a change occurs, a new version of the modified data is added to the object‟s metadata. this allows users to retrieve older versions of a data object by performing a “date and time” search, or to retrieve the “current version”, i.e. the one that is most up-to-date. a query framework able to interact with all kinds of metadata and the semantic relations between them for retrieval, conversion and redistribution of the 3d models and the related metadata information. as explained above, this framework is implemented by using the solr technology. fedora also provides a sparql endpoint which allows to query the semantic resource index directly. a set of plugins for blender and quantumgis which allows them to directly interact with the repository to download and re-ingest the digital content and to perform annotation and geographic data enrichment of the 3d models. figure 1: the structure of our 3d digital repository ________________________________________________________________________________ geoinformatics ctu fce 2011 122 3.3 ingesting operations the operations required by the system to archive, retrieve and manage the 3d digital content are straightforward. the preliminary operation to execute before storing 3d content is to collect all the available information in order to create rich sets of metadata. in an ideal scenario, the acquisition tools (i.e. laser scanner, digital cameras etc.) would be able to provide the necessary provenance metadata together with the 3d models they produce in a standard encoded format. a lot of effort has been put into resolving this issue within the 3d-coform project; but the direct production of standard metadata during the acquisition phase still remains a big challenge to overcome. anyway, it is always possible to retrieve and put this information into a standard format (i.e. cidoc-crm and crmdig) through various mapping operations, even if the mapping process is often slowed down by the multiples and different proprietary formats used by each acquisition tool. 
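once mapped to a standard schema, the xml metadata still has to be checked before packaging, as the ingestion step described below stresses. the authors validated their records manually with jedit's xml plugins; the fragment below is only a hedged sketch of how such a check could be automated with the lxml library (an assumption), with schema and file names as placeholders.

```python
from lxml import etree

def validate_metadata(xml_path, xsd_path):
    """Validate a metadata record against an XML Schema before SIP packaging."""
    schema = etree.XMLSchema(etree.parse(xsd_path))
    try:
        doc = etree.parse(xml_path)          # also catches well-formedness errors
    except etree.XMLSyntaxError as err:
        return False, [str(err)]
    if schema.validate(doc):
        return True, []
    return False, [str(e) for e in schema.error_log]

ok, errors = validate_metadata("mets_record.xml", "mets.xsd")
print("valid" if ok else "\n".join(errors))
```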
when both the digital and the information content is ready, a package is created to ingest it all into the repository. the ingestion stage is the most delicate of the entire process. normally we don‟t encounter particular problems when uploading the 3d model(s), even if encoded in proprietary formats, thanks to the abstraction of the container and to its capability to store any type of file it receives. but the metadata should be validated before ingestion, to debug the xml code from syntactic errors and to check the structural and semantic coherence towards the chosen schemas. if metadata pass all the necessary validity tests, a sip containing both the digital data and the metadata can be created and sent to the repository to be stored. at ingestion time, a set of new compound objects are created in accordance with the physical objects or monuments digitized with the acquisition operations; after ingestion, each 3d model and metadata set referring to a particular object or monument becomes a datastream of the related compound object. the system is able to generate on the fly the dublin core and cidoc-crm description of each compound object upon creation; the internal structural relationships inside the compound object are provided through the sips as well and encoded using the mets format. additionally each sip could contain the thumbnails of the 3d models, very useful when visualizing a preview of the object during the browsing and query/retrieval operations. all the metadata information will also be stored in the semantic resource index and used to extend the internal semantic network with the necessary descriptions of the new objects. the new information, once uploaded, will be immediately made available to all the other services. 3.4 search and retrieval there will be different ways to query the digital repository, in particular it will be possible to retrieve 3d models of a particular object or monument by querying the descriptive metadata or the information in the semantic network using the sparql endpoint provided by the system. refinements of the queries and advanced search criteria can be specified from the solr-based query interface or by using the faceted browsing facilities offered by the solr framework. as a result of the query operations, a set of 3d models (with the related thumbnail for preview) will be returned. the user can then perform the following operations: visualize all the available versions and the available formats of a given 3d object. load a chosen version of the object in a browser. this operation will be performed by the conversion framework which will create (where possible) an x3d representation of the selected model and will publish it through an html5 page created on the fly. download the original object or a standard encoded version of it (in collada, x3d or citygml) together with the related metadata, for personal use or for further processing and enrichment. get a google-compatible collada+kml representation of the object (if geographic information is available), suitable to be used in different scenarios (e.g. loaded in google sketchup, google earth/maps and other similar applications). 3.5 3d content enrichment the standard encoded versions of all the 3d models stored in our archive are ready to be enriched with geographic and structural information using the quantumgis plugin and the blender citygml markup plugins. 
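as a brief aside on the query framework of section 3.4 before the enrichment workflow continues: a faceted solr query of the kind described there can be sketched over solr's standard http interface. the host, core and field names below are hypothetical and do not come from the system described in this paper.

```python
import requests

SOLR_SELECT = "http://localhost:8983/solr/heritage3d/select"   # hypothetical core

params = {
    "q": "uchi maius",                                 # free-text search
    "rows": 10,
    "wt": "json",
    "facet": "true",
    "facet.field": ["object_type", "file_format"],     # hypothetical facet fields
    "fq": "file_format:collada",                       # drill-down filter
}

resp = requests.get(SOLR_SELECT, params=params, timeout=10).json()
for doc in resp["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
print(resp["facet_counts"]["facet_fields"])
```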
the metadata provided by the system give the possibility to immediately upload the processed 3d models as an extension of the already existing compound objects without the need to recreate metadata from scratch. the new processing information will also enrich the provenance metadata of the existing objects to extend the internal semantic network. the processed 3d models, once re-ingested, will become new datastreams of the original compound object. the original 3d models will never be affected by any conversion or processing operation: they will continue to exist in their original format until a delete operation will be explicitly invoked. ________________________________________________________________________________ geoinformatics ctu fce 2011 123 4. testing the system 4.1 the uchi maius dataset the digital content we have used to test our repository comes from the digital archive of uchi maius, an archaeological excavation site located about 100 km south of tunis which includes roman and islamic remains. the archive comprises of the 3d model of the whole excavation area and many 3d models of single monuments. it was created by the university of pisa and the university of sassari (both in italy), which surveyed the area in 2002 by using a total station (leica tcr 307) and a digital camera with calibrated parameters (canon eos 400) [15]. we used all the information coming from these tools for creating the provenance metadata for our digital content. a good set of information regarding the acquisition process (calibration, resolution of the tools etc.) is also available. additional surveys and measurements were carried out to take notes of the most articulated details of the buildings and their apparatus and to acquire a very detailed set of spatial data describing the whole area in geographic terms; most of this information was used to test the data enrichment framework. we have prepared various sips, each one containing the digital content (i.e. the 3d models in .dxf format), the mets description of compound objects‟ structure to be recreated and the provenance metadata encoded using crmdig. afterward we validated and ingested the whole content into the repository. the sips creation was carried out by using a preliminary version of one of the ingestion tools that we are developing and that will provide (in its final version) all the basic functions for metadata creation and validation, for sips aggregation and upload into the central repository. the tool is currently in a very early stage of development and only a basic metadata creation mechanism (based on templates) and the upload service are fully working. the metadata validation has been performed manually by using the various xml plugins provided by the jedit editor. after the ingestion of 3d models and metadata, we tested the query/retrieval system in order to verify the performances of the solr framework and of the sparql endpoint. in both cases we got meaningful information on our digital objects and on the internal relationships between them. 4.2 data enrichment of the uchi maius data in a previous work we already described in technical terms the process of building a citygml representation of a 3d models and how the archaeologists can enrich the citygml code with cidoc-crm entities to insert semantic information into the model itself (e.g. historical information like the year of foundation or destruction of the city and so on) [16]. 
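as a rough illustration of that enrichment step (the real work is done by the blender and quantumgis plugins discussed next), the sketch below injects a generic attribute pointing to a cidoc-crm resource into a citygml building element. the namespace uris follow common citygml 2.0 conventions but should be treated as assumptions, and the file names and uri values are placeholders.

```python
import xml.etree.ElementTree as ET

# CityGML namespaces (assumed 2.0-style URIs)
BLDG = "http://www.opengis.net/citygml/building/2.0"
GEN = "http://www.opengis.net/citygml/generics/2.0"
ET.register_namespace("bldg", BLDG)
ET.register_namespace("gen", GEN)

tree = ET.parse("uchi_maius_building.gml")             # placeholder input file

for building in tree.iter(f"{{{BLDG}}}Building"):
    # attach a semantic annotation as a CityGML generic string attribute
    attr = ET.SubElement(building, f"{{{GEN}}}stringAttribute",
                         name="cidoc_crm_reference")
    value = ET.SubElement(attr, f"{{{GEN}}}value")
    value.text = "http://example.org/crm/E22_Man-Made_Object/forum_temple"

tree.write("uchi_maius_building_enriched.gml",
           encoding="utf-8", xml_declaration=True)
```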
we have made the process more efficient and the interaction of the blender plugin with the system is now straightforward. the new blender plugin is now able to download the citygml encoded version of one of our 3d models (generated directly by the repository), to import it into blender, to enrich it with cidoc-crm code and to re-ingest it back into its original context (figure 2).

figure 2: the blender plugin in action on the uchi maius 3d data

the same operations were performed with the quantumgis plugin, which is now able to work in a similar way on the same citygml code to create geographic information for a given 3d model and ingest it into the repository. the rich set of spatial information provided by the archaeologists for the uchi maius excavation site was used to enrich the collada representation of the 3d models with a lot of spatial data, which made them suitable to be exported and used in a google earth/google maps context.

5. conclusions and further work

the activities described in this paper are the continuation of a very fruitful collaboration between pin, university of florence and the italian ministry for cultural heritage, started in 2010 and using the uchi maius dataset kindly provided by the archaeologists of the department of history of the university of sassari [16,17]. the final goal of our effort is the definition of an open repository of 3d cultural heritage models, providing standard mechanisms for preservation, updating, and dissemination. even if we are still at the very beginning of the development activity, the preliminary results seem very encouraging, mainly because of the maturity of the open source technology we are using and the affordability of the available standards, which are able today to provide a true and solid integration environment that was impossible to imagine in the past. notwithstanding this optimistic view, a lot of work remains to be done, and most of it concerns the adaptation of the sophisticated technology we are dealing with to the everyday needs of cultural heritage people, who are usually very sensitive to interoperability issues, but rarely willing to sacrifice the simplicity and usability of the tools for their achievement. a lot of development is still needed at the repository level as well: even if most of the integration problems can be solved today with the tools we have already implemented, other issues still remain: detailed and high quality metadata are required to achieve the scientific authentication of the 3d artifacts, and various issues of digital preservation remain open. fedora and other similar digital object repositories have always kept an eye on preservation, but a more active management of the life cycle of digital resources, from data creation and management to data use and rights management, needs to be implemented in order to reduce the risk of losing control over the digital content and compromising its survivability. in the future we will focus our effort on the development of clearer and more user-friendly interfaces and on good quality documentation on how to use them. the stability of the core and of the query/retrieval service leaves us free to concentrate our work on the improvement of the other services, in particular on the conversion and enrichment mechanisms. the system is already able to host standard-encoded thesauri (in skos) and gazetteers.
future development will extend the enrichment framework to take advantage of this kind of knowledge as well. we are also planning to implement an oai-pmh repository to publish information concerning the 3d digital objects stored in our archive. the improvement of the existing plugins and the development of new ones to extend the functions of our system will be among the priorities of our future development activity. 6. references [1] koller, d., frischer, b., humphreys, g.: research challenges for digital archives of 3d cultural heritage models, acm j. comput. cult. herit. 2, 3, december 2009, article 7. [2] 3d pdf technology, http://www.adobe.com/manufacturing/solutions/3d_solutions/. [3] collada: digital asset and fx exchange schema, http://www.collada.org. [4] citygml: a common information model for the representation of 3d urban objects, www.citygml.org. [5] x3d: open standards for real-time 3d communication. http://www.web3d.org/x3d/. [6] the london charter initiative, www.londoncharter.org, june 2006. [7] crofts, n., doerr, m., gill, t., stead, s., stiff, m.: definition of the cidoc conceptual reference model. tech. rep., http://www.cidoc-crm.org/docs/cidoc_crm_version_5.0.1_mar09.pdf, march 2009. [8] metadata encoding and transmission standard (mets): http://www.loc.gov/standards/mets/. [9] pitzalis, d., niccolucci, f., theodoridou, m., doerr, m.: lido and crmdig from a 3d cultural heritage documentation perspective, proceedings of vast 2010, paris, september 2010, 87-95. [10] 3d-coform: tools and expertise for 3d collection formation, http://www.3d-coform.eu. [11] fedora repository project: general purpose, open source digital object repository system, www.fedoracommons.org. [12] doerr, m., tzompanaki, k., theodoridou, m., georgis, c., axaridou, a., havemann, s.: a repository for 3d model production and interpretation in culture and beyond, proceedings of vast 2010, paris, september 2010, 97-104. [13] pan, x., beckmann, ph., havemann, s., tzompanaki, k., doerr, m., fellner, d.w.: a distributed object repository for cultural heritage, proceedings of vast 2010, paris, september 2010, 105-114. [14] solr open source enterprise search platform from the apache lucene project, http://lucene.apache.org/solr/. [15] lorenzini, m.: dati e conoscenza archeologica: il citygml per il 3d del foro di uchi maius in tunisia, graduation thesis, università degli studi di pisa, 200ř. [16] felicetti, a., lorenzini, m., niccolucci, f.: semantic enrichment of geographic data and 3d models for the management of archaeological features, proceedings of vast 2010, paris, september 2010, 115-122. [17] felicetti, a., lorenzini, m.: open source and open standards for using integrated geographic data on the web, proceedings of vast 2007, brighton, november 2007, 63-70. 
using unusual technologies combination for madonna statue replication

bronislav koska
czech technical university in prague, faculty of civil engineering
thákurova 7, prague 6, czech republic
bronislav.koska@fsv.cvut.cz

keywords: laser scanning, computer tomography, ct, rapid prototyping, milling, nurbs

abstract: an unusual combination of modern technologies was used to create a precise copy of a medieval lime-wood madonna with polychromy, and the process is described in this paper. the technologies used and combined were computer tomography, 3d scanning and modeling with textures, rapid prototyping and 3d milling. these modern technologies are commonly used in relic documentation and restoration, but their combination in a synergic procedure is unique. the main reason for using them was the owner's demand to minimize the working time with the original statue, which was needed for an exhibition. the time needed for working with the original statue was in this case about 20 percent of the standard time, and the overall time needed for creation of the copy was about 40 percent of the standard time.

1. introduction

the madonna from rouchovany was created around 1330. it represents an important medieval relic and is part of the national gallery archive. it is also the most significant relic of the town of rouchovany, and that is why its councilmen decided to have a copy created and exhibited in its original church. the national gallery requested that the statue be released from the exhibition for the shortest time possible, and that is why 3d scanning and the technologies that follow were incorporated in the project. the statue shape is quite complex from the 3d scanning point of view; especially the hollow back part presents an immense challenge, see figure 1. the problem of measuring parts unreachable by a standard 3d scanner was solved by using computer tomography, which was also used for the restoration survey. the 3d model was milled from lime wood on a 3d milling machine to speed up the copy creation process. a 3d colour copy was also rapid prototyped at 50 percent scale to help the restorer with sculpting of details. the only part which had to be done side by side with the original statue was the polychromy and patina coating.

2. computer tomography

the first applied technology is computer tomography (further called ct), which was used for two purposes. one purpose was a restoration survey of the statue's condition and the second was 3d measurement of surfaces unreachable by other technologies (the hollow back part, long and narrow slots). the measurement was realised at the university hospital motol by mudr. j. lisý during spare night hours on a siemens ct scanner. the output of the ct measurement was a series of slices with a resolution of 512x512 pixels (see figure 2) and a mutual distance of one millimetre. the pixel size is also about 1 mm (0.9765…). the ct working box with defined accuracy is 500x500 millimetres times the height, which is slightly smaller than the madonna's size; that is why it had to be scanned in two series.
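purely for illustration: the slice-stack-to-surface step that the next section performs with the devide software can be sketched in python with pydicom and scikit-image's marching cubes (assumed third-party libraries, not the tools used in this project). the isosurface threshold and paths are placeholders; the slice spacing values follow the figures given above.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

# read the slice series and stack it into a volume, sorted along the scan axis
slices = [pydicom.dcmread(p) for p in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

# voxel spacing: ~1 mm slice distance and ~0.98 mm pixels in this project
spacing = (1.0, 0.9765, 0.9765)

# extract the material/air interface as an isosurface (threshold is a placeholder)
verts, faces, normals, _ = measure.marching_cubes(volume, level=300.0,
                                                  spacing=spacing)

# write a simple ASCII PLY that common mesh tools can read
with open("madonna_ct.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(verts)}\n"
            "property float x\nproperty float y\nproperty float z\n")
    f.write(f"element face {len(faces)}\n"
            "property list uchar int vertex_indices\nend_header\n")
    for v in verts:
        f.write(f"{v[0]} {v[1]} {v[2]}\n")
    for a, b, c in faces:
        f.write(f"3 {a} {b} {c}\n")
```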
we got the scanned data together with the freeware ct viewer xvision, which can also be used to export the slices to the standard ct format called "dicom" in version 3.0.

figure 1: madonna from rouchovany, photograph and 3d model

there is a range of freeware/open source software for working with "dicom" image series available on the internet. we needed to create a triangulated surface (further called a mesh) from the dicom series. the open source software devide [1, 2] proved suitable for this purpose: it has a modular architecture, enables the creation of schematic macros for automatic processing, and can export the 3d model to the standard mesh format "stl".

figure 2: example horizontal slice from ct in the xvision software

figure 3: model exported from the devide software and model after post-processing in geomagic studio

the model created and exported from the devide software was rather angular and was improved by post-processing in the geomagic studio 11 software (see figure 3). it is clear that a 3d model based on the ct data alone is not sufficient for the creation of a copy, and a supplementary method had to be used.

3. 3d scanning – lors

as stated before, the 3d model based on the ct data was not sufficiently detailed, so it was decided to do supplementary scanning with a standard 3d scanner. because of the statue size and instrument availability, the self-developed laser scanning system lors [3] was used. the scanning took two working days and was realized during the closing days of the national gallery exhibition (see figure 4). 71 scan positions were accomplished without using control points; the scanning in each of six height levels was realized in twelve positions mutually rotated by 30 degrees. the scanning resolution was set to about 0.5 mm for the upper half of the statue and about 1 mm for the lower part. altogether about 40 million points were measured. the main parameters of the scanning system lors are stated in table 1.

3.1 modelling

registration was done sequentially using the manual and global registration processes in the geomagic studio software. at first, each height level was registered independently. then all point clouds were reduced to a lower resolution of 1x1 mm, because the amount of points was too large to be handled in geomagic studio. at the end of the registration process all height levels were globally registered together. there were about eight million points, and the global registration standard deviation from all 71 scan positions was 0.8 millimetre. meshing was realized in the open source software vrippack [4]. surprisingly, this older open-source software gives better results than geomagic studio, provided the user has enough information about the scanning system used and enough patience to learn how to control it. a voxel size of 0.5 mm was used in the vrippack software. the 3d model covered more than 95 percent of the statue surface, excluding the unreachable hollow back part, which does not carry significant spatial or chromatic information. surfaces missing in the data from the lors scanning system were filled in using the ct mesh data (see figure 5). the final model is presented in figure 1 and a detail in figure 6.
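as a rough open-source analogue of the devide step described above – turning the dicom slice stack into an stl mesh – the following sketch uses pydicom, scikit-image and numpy-stl, none of which are mentioned in the paper; the iso-surface threshold is a guess that would have to be tuned by hand for the wood/air boundary.

```python
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure
from stl import mesh as stl_mesh

def dicom_series_to_stl(dicom_dir, out_path, iso_level=-400.0):
    # load all slices and order them along the scan axis
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)
    # voxel size: ~1 mm slice distance and ~0.98 mm pixels, as reported for this scan
    dz = abs(float(slices[1].ImagePositionPatient[2]) - float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    # extract the iso-surface; the threshold separating wood from air is an assumption
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level, spacing=(dz, dy, dx))
    # write the triangles to an stl file
    surface = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
    surface.vectors = verts[faces]
    surface.save(out_path)
```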
figure 4: scanning with the laser scanning system lors

table 1: parameters of the laser scanning system lors
working range (width x height x depth): 1.3 x 0.9 x 1.0 m
maximal resolution (horizontally x vertically): 1950 x 2304
laser beam width: 1 mm
accuracy: 0.5 mm
scanning rate: 4.5 million points / 32 minutes
size (width x height x depth): 1 x 0.2 x 0.2 m
weight: 2 kg

figure 5: surfaces filled in from the ct data (grey) into the lors data (blue) in areas unreachable for lors

figure 6: photograph and the model of a detail (front view of the left shoulder)

4. rapid prototyping

a 3d colour copy was rapid prototyped at 50 percent scale to help the restorer with the sculpting of details during the period when he did not have the original statue at his disposal. the 50 percent printing scale was chosen because of the statue size and the parameters of the 3d printer zprinter 450, which is located in the laboratory of photogrammetry (faculty of civil engineering, ctu prague) [5]. the statue body was hollowed out with empty cavities because of the expensiveness of the printing material (see figure 7).

4.1 model partition

the zprinter 450 has a printing box of 203 x 254 x 203 millimetres [5], and that is why the model had to be divided into five parts. the surfaces of the parts had to be closed and supplemented with connection elements, which were realized as column-shaped holes with a diameter of 3 mm and a depth of 10 mm (see figure 7 on the left).

figure 7: two parts prepared for rapid prototyping (cuts, hollow spaces, connection parts)

4.2 texture mapping

texture was also applied to the model for better orientation of the restorer and for presentation of the chromatic information. self-developed software was used for the texture mapping, because the inner and outer orientation of the texture images was precisely known from the scanning system lors. the software is called precise texture mapping [6] and enables some basic texture balancing (see figure 7 on the right). the final printed model can be seen in figure 8.

figure 8: 3d printer result

5. 3d milling

it was decided to use a 3d milling cutter (cnc) to speed up the realization of the copy. in mechanical engineering, file formats for the description of so-called free-form shapes are based on the nurbs (non-uniform rational b-spline) representation, whereas the formats used in the project so far were based on triangulated irregular networks, also called meshes. the character of the two formats (mesh and nurbs) is fundamentally different, and therefore a fully automatic conversion is not possible; a nurbs model does not seem to preserve as much detail as a mesh model, at least during automatic conversion. the mesh model was therefore enlarged by 1 millimetre in the normal direction so that rather more material than less is kept in the details. a valid nurbs model was created by an automatic process in the geomagic studio software and was then manually edited. the nurbs model quality was checked using the 3d compare tool in geomagic studio; there were only very small and rather complicated, angular areas where the nurbs model was smaller than the mesh model (see the yellow areas in figure 9).
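the 1 mm outward enlargement mentioned above can be prototyped with a generic mesh library. trimesh is an assumption here, not the tool used in the project, and a plain per-vertex offset is only a crude stand-in for a true offset surface; it is acceptable because the offset is small.

```python
import trimesh

def inflate_mesh(in_path, out_path, offset_mm=1.0):
    # load the triangulated model, e.g. the stl produced by the scanning pipeline
    model = trimesh.load_mesh(in_path)
    # push every vertex outwards along its vertex normal, keeping rather more material than less
    model.vertices = model.vertices + model.vertex_normals * offset_mm
    model.export(out_path)
```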
6. result

only the last replication step had to be realized with the original statue at the client's place. it comprised the final surface finishing, polychromy and patina coating, and it took only a few weeks instead of the few months usually needed. the restorer refined the milled model according to the rapid prototyped model in his studio. the result is displayed in figure 11.

figure 9: differences between the nurbs and the mesh model

the realization of the statue copy using the 3d milling cutter is shown in figure 10.

figure 10: 3d milling – cnc

figure 11: side by side photograph of the copy and the original statue

7. summary

the paper describes a unique procedure for the creation of a copy of a medieval madonna. the main advantages of the suggested procedure are the significantly shorter time needed for working with the original statue (in this case about 20 percent of the standard time) and the considerably shorter overall time for the creation of the copy (in this case 30-40 percent of the standard time). the procedure incorporates a range of relatively modern technologies in a synergic combination: computer tomography, 3d scanning and modelling with textures, rapid prototyping and 3d milling. the realized project proved the applicability of the procedure and showed its advantages, but also revealed its most problematic parts.

8. acknowledgements

this research has been supported by grant sgs11/046/ohk1/1t/11.

9. references

[1] tu delft graphics group: project devide [online]. 2008 [cit. 2011-06-05]. available on www: <http://graphics.tudelft.nl/projects/devide>.
[2] pavelka, k., dolanský, t.: using of non-expensive 3d scanning instruments for cultural heritage documentation. cipa symposium wg6, antalya, isprs, 2003, vol. 1, p. 158-162. issn 1682-1777.
[3] botha, c. p., post, f. h.: hybrid scheduling in the devide dataflow visualisation environment. proceedings of simulation and visualization (h. hauser, s. strassburger, and h. theisel, eds.), pp. 309-322, scs publishing house erlangen, february 2008.
[4] koska, b., pospíšil, j., štroner, m.: innovations in the development of laser and optic rotating scanner lors. xxiii international fig congress, munich, germany, isbn 87-90907-52-3, 2006. available on www: <http://k154.fsv.cvut.cz/~koska/publikace/publikace_en.php>.
[5] curless, b., levoy, m.: a volumetric method for building complex models from range images. siggraph '96, new orleans, la, 4-9 august 1996. available online: <http://www.cs.washington.edu/homes/curless/>.
[6] zcorporation: zprinter 450 [online]. [cit. 2011-06-05]. available online: <http://www.zcorp.com/en/products/3d-printers/zprinter-450/spage.aspx>.
[7] koska, b.: precise texture mapping [online]. 2010 [cit. 2011-04-30]. available online: <http://k154.fsv.cvut.cz/~koska/software/ptm/ptm.php>.
[8] pavelka, k., řezníček, j.: low-cost culture heritage documentation and preservation using optical correlation scanner (ocs). proceedings of 22th cipa symposium [cd-rom], kyoto, isprs, 2009, vol. 1, p. 125-132.
[9] pavelka, k.: using of non-expensive 3d scanning instruments for cultural heritage documentation. cipa symposium wg6, korfu, 2002, pp. 59-68.
[10] koska, b., pospíšil, j., štroner, m.: the result presentation of the development of laser and optic rotating scanner lors and introduction of public library of classes and functions spatfig. optical 3-d measurement techniques vii, volume i, vienna: tu vienna, 2005, vol. 1, p. 63-73. isbn 3-9501492-2-8.
analysis of the variations of measured values in continuous long-term geodetic monitoring

jan vaněček
department of special geodesy, faculty of civil engineering
czech technical university in prague
thákurova 7, 166 29 prague 6, czech republic
jan.vanecek.2@fsv.cvut.cz

abstract

geodetic measurement of shifts and deformations by a total station is a well-known and widely used method. this paper presents an analysis of the variations over time of the values measured in continuous geodetic monitoring. measured data from a specific monitoring system of a surface mine, covering the time period from january 2006 to july 2010, are used in the analysis. the aim of the analysis is to describe the linear trend and the periodic changes in the measured data (horizontal direction, zenith angle and slope distance). the main method of the analysis is an approximation by a linear-harmonic function.

keywords: measurement of shifts, least squares adjustment, linear and harmonic approximation, total station, continuous monitoring.

introduction

a special branch of engineering geodesy is the measurement of shifts and deformations, which includes a special case – continuous monitoring. the measurement of shifts is regulated in standards, e.g. in the czech republic by csn 730405, and is also described in academic textbooks (e.g. [10]). this branch of geodesy is dedicated to measuring objects periodically over time and includes many applications, e.g. monitoring of bridges (e.g. [5]), tunnels (e.g. [3]) or historical buildings (e.g. [8]), vertical ground deformation measurement (e.g. [1]), or even sub-millimetre level measurement of structures (e.g. [2]). the measurements are usually realized by terrestrial methods (e.g. [9]) or by global navigation satellite system methods; sometimes complex monitoring systems consisting of geodetic and geotechnical instruments are advantageously used (e.g. [6]). a special case is continuous monitoring, where the measuring instrument records the measured values 24 hours a day, 7 days a week, 365 days a year. this procedure is used for monitoring specific structures or other objects which pose a large risk of damage to lives and property; an example might be the monitoring of slopes which are vulnerable to landslides (e.g. [4]). in the processing and evaluation of the obtained results it is useful to know whether the measured values change in time in some way other than by the shifts of the target points, i.e. whether the measured values grow linearly, drop linearly or change periodically in time. the aim of this article is to describe the analysis of the values measured by a total station which is used for monitoring the stability of the slopes of the krušné mountains.

description of the monitoring system

the data used for the analysis of the development of the measurements were obtained from severní energetická a. s. this company owns the surface mine čsa, where a monitoring system observing landslides in the krušné mountains is installed.

figure 1: total station in the protective shelter.
the monitoring system consists of several elements; the main element is the automatic total station leica tcra2003. other elements of the monitoring system are observed points situated on the slopes of the krušné mountains (the ore mountains, which form the west boundary of the czech republic with germany, see https://en.wikipedia.org/wiki/ore_mountains), reference points located on technical objects on the opposite side of the mine, and a management center. the reference points are located in a stable area and no shifts of them are expected. the total station is set in solid ground in the middle of the mine in a protective metal shelter (see fig. 1) with windows made of ordinary glass; the shelter is also equipped with heating and air conditioning to maintain an optimal temperature. the schematic situation of the total station and the reference points is shown in fig. 2. measurement of the observed and reference points is carried out once an hour and is repeated 365 days a year, with the exception of service outages and periods when the visibility is not sufficient. the data sent to the management center are the values measured by the total station (horizontal direction, zenith angle and slope distance) together with the outside temperature and atmospheric pressure.

figure 2: schematic sketch of the situation of the instrument and reference points.

shifts of the observed points are calculated immediately for each round of measurements, and the relevant employees of the company are informed by sms if the limit values of the shifts are exceeded. a detailed description of the establishment of the monitoring system is given in [7]. the measured values used in the analysis described below cover the time period from the beginning of the year 2006 to july 2010.

mathematical basis of the analysis of the variations of measured values

at the beginning, different methods (regression analysis, correlation analysis, discrete fourier transform, etc.) were tested for the analysis of the variations of the measured values, but in the end an approximation of the measured values by a specific curve (the l-h function) was selected as the best way. the l-h function contains a linear and a harmonic part and its equation is the following:

y = a + b·x + c·sin(2π·x/t + d),   (1)

where a is an absolute coefficient, b is a linear coefficient (slope of the regression), c is the amplitude of the harmonic part, d is a phase shift and t is a period. the reason for the choice of this l-h function was to indicate the linear trend and the periodic changes in the measured values concurrently. there is a basic assumption that the harmonic changes have an annual period depending on the variations of air temperature and other weather conditions. the coefficients of the l-h function were calculated by the method of least squares. daily averages of the measured values were used in the calculation for the purpose of better stability of the calculation; this also eliminates the daily variations of the measured values. approximate values of the unknowns were determined experimentally, and the same weights were assigned to all daily averages. after the first approximations of the daily averages of the horizontal direction, the standard procedure of the calculation described above was changed and one more harmonic member was added to equation (1).
so the final equation is then:

y = a + b·x + c·sin(2π·x/t1 + d) + e·cos(2π·x/t2 + f),   (2)

where e is the amplitude of the second harmonic part, f is its phase shift and t2 is the period of the second harmonic part. the calculation of the estimates of the unknown coefficients is done in several steps, in which the unknown coefficients are estimated in groups or independently. weights of the measurements are also applied; they are derived from the standard deviations of the measurements calculated in the previous step. the least squares method was again used as the basic calculation method for the estimation of the unknown coefficients. the calculated linear coefficient of the l-h function was tested by tests of statistical hypotheses. the tested hypothesis is that there is no linear trend in the measured data, namely whether the calculated coefficient b corresponds to the expected value θ; in this case the premise is that the parameter b has the value θ = 0. the null hypothesis is of the form:

h0: b = θ ⇒ b = 0.   (3)

the testing of statistical hypotheses can be done on the basis of two test criteria. in the first case, the tested value f has the fisher distribution of probability with (n − k) degrees of freedom:

f = (b − θ)² / σb².   (4)

in the second case, the tested value t has the student t-distribution of probability with (n − k) degrees of freedom:

t = (b − θ) / σb.   (5)

the critical values fα and tα/2 are determined from tables or by calculation for a two-sided test at the significance level α. we reject the null hypothesis if f > fα or |t| > tα/2. finally, a comparison was made of the daily averages of the measurements with their l-h function and the l-h curve approximation of the air temperature, which was determined by the same procedure as the l-h function of the measured values according to (1). the premise of this comparison is a dependency of the measured values on the changes of the average air temperature during the year. the comparison was done only graphically. the analysis was carried out for the reference points no. 91, 92, 93, 94 and 95 for the time period from january 2006 to july 2010. these points are all the reference points which were or are measured by the total station; the observed points were not included in the analysis because of their possible movements.

analysis of the air temperature variations

a part of the monitoring system described above is a sensor measuring air temperature and air pressure. the values measured by this sensor are saved with each record of the measurements together with the values of the horizontal direction, zenith angle and slope distance. the values of the air temperature recorded for measurements to reference point 93 were used in the analysis of the air temperature variations; the reason for using point 93 is the highest number of measurements. the analysis of the air temperature is based on an approximation of the daily averages of the air temperature by the l-h function according to formula (1), because there is a presumption of only one significant, annual period in the measured temperature.

figure 3: approximation curve of air temperature, reference point no. 93.

the result of the approximation can be seen in fig. 3, and the values of the significant coefficients of the l-h function – the calculated linear coefficient, amplitude and period – are listed in table 1. as follows from fig. 3, and as was expected, the temperature has a periodic progress with an annual period and an amplitude of 10 °c.
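a minimal python sketch of the approximation and of the trend test of eqs. (3)-(5) used throughout the following sections is given below. the input layout, the starting values and the use of scipy are assumptions for illustration; the paper's own computation was carried out differently, in several weighted steps. note that for a single tested parameter f = t² and the fisher critical value with 1 and (n − k) degrees of freedom is close to 4, which matches the critical values quoted in the tables below.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def lh(x, a, b, c, t, d):
    # linear-harmonic function of eq. (1); eq. (2) would add a second term e*cos(2*pi*x/t2 + f)
    return a + b * x + c * np.sin(2.0 * np.pi * x / t + d)

def fit_and_test(days, values, alpha=0.05, period_guess=365.0):
    # least-squares estimate of the coefficients from daily averages (numpy arrays assumed);
    # reasonable starting values are required, as the text notes
    p0 = [float(np.mean(values)), 0.0, float(np.std(values)), period_guess, 0.0]
    popt, pcov = curve_fit(lh, days, values, p0=p0)
    b, sigma_b = popt[1], float(np.sqrt(pcov[1, 1]))
    # trend test of eqs. (3)-(5): h0 states b = 0
    dof = len(values) - len(popt)
    t_stat = b / sigma_b
    f_stat = t_stat ** 2
    reject = f_stat > stats.f.ppf(1.0 - alpha, 1, dof) or abs(t_stat) > stats.t.ppf(1.0 - alpha / 2.0, dof)
    return popt, sigma_b, reject
```

fitting the daily averages of the air temperature with the period started near one year yields coefficients of the kind listed in table 1 below.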
the lower calculated amplitude is probably given by the use of daily averages (nights are cooler). the approximation of the daily averages of the air temperature gives a linear trend of −0.15 °c per year. this slight decrease is visible in fig. 3, but no specific conclusion can be drawn from it, because this is a short period of time for temperature; it is therefore appropriate to take this linear trend of temperature only as informative. the linear trend was not tested by the tests of statistical hypotheses (3).

table 1: selected calculated coefficients of the l-h function of air temperature for point no. 93
b = −4.16e-04 °c/day (σb = 2.2e-08 °c/day), i.e. −0.15 °c/year (σb = 8e-06 °c/year)
c = 9.91 °c (σc = 1e-05 °c)
t1 = 366.4 days (σt1 = 5e-05 days)

analysis of the horizontal direction variations

the analysis of the horizontal direction was done according to equation (2), which was used for the approximation of the data because of the large variance of the measured data and the assumption of two important periods, which were identified in the graphs of the measured values. the input data of the analysis were the daily averages of the measurements of the horizontal direction.

figure 4: approximation curve of horizontal direction, ref. point no. 91.

another specific feature of the analysis of the horizontal direction is the need to separate the measured data into time intervals within which the zero direction setting of the horizontal circle is the same. these intervals have been created on the basis of knowledge of service outages and other discontinuities in the data. finally, the 5 most important intervals were selected; they are listed in table 2.

figure 5: approximation curve of horizontal direction, ref. point no. 92.
figure 6: approximation curve of horizontal direction, ref. point no. 93.

table 2: selected time intervals
interval 1: 2. 1. 2006 – 26. 9. 2006 (267.0 days)
interval 2: 29. 9. 2006 – 30. 11. 2007 10:00 (427.4 days)
interval 3: 28. 1. 2008 – 23. 10. 2008 (269.0 days)
interval 4: 27. 10. 2008 – 16. 9. 2009 12:00 (324.5 days)
interval 5: 21. 10. 2009 11:00 – 2. 5. 2010 (192.5 days)

the results of the approximations for the reference points no. 91, 92 and 93 are graphically represented in figs. 4, 5 and 6; the charts show the approximation curves for all tested intervals. the values of the linear trend are listed in table 3, table 4 shows the calculated sizes of the periods for the individual time intervals, and table 5 contains the results of the tests of statistical hypotheses about the linear trend.

table 3: values of the linear trend of the horizontal direction, b in mgon/month (σb in brackets), for time intervals 1-5
point 91: 0.99 (0.02), 0.25 (0.04), −1.35 (0.02), −1.04 (0.01), −0.90 (0.04)
point 92: 0.85 (0.02), 0.28 (0.02), −1.31 (0.02), −1.15 (0.02), −0.88 (0.04)
point 93: 1.19 (0.02), 0.27 (0.01), −1.13 (0.02), −0.85 (0.01), −0.92 (0.03)
point 94: 0.97 (0.02), 0.07 (0.06) – measured in the first two intervals only
point 95: 0.85 (0.04), −1.27 (0.02), −1.00 (0.01), −0.86 (0.03) – not measured in all intervals

the results listed in table 3 show that the linear coefficient is positive in the first two time intervals and negative in the other intervals.
the reason for this is not clear, but the change was probably caused by an adjustment of the mounting of the total station or by a repair of the instrument. the sizes of the linear changes are around 1 mgon/month in all time intervals; this value is high and definitely not negligible. the tests of statistical hypotheses were made at the significance level α = 5 % with both test criteria (4) and (5). the null hypothesis was rejected in all cases except the 2nd time interval for point 94. based on these results and the tests carried out, we can consider the linear trend found in the data as proven. the sizes of the calculated linear coefficients are similar when comparing the values calculated for the various reference points in the same time interval; it can therefore be assumed that the calculated horizontal angles are deprived of this influence of the linear trend. the cause of the linear trend in the measured data is most likely to be found in the way the total station measures: the instrument measures only in the first face of the telescope, so the total station rotates in a clockwise direction only, and it is therefore possible that this rotary motion causes a gradual tightening or loosening of the total station mounting.

table 4: values of the calculated periods of the horizontal direction, t1 and t2 in days (σ in brackets), for time intervals 1-5
point 91: t1 = 163 (3), 323 (22), 123 (3), 152 (2), 119 (4); t2 = 84 (4), 137 (3), 67 (11), 72 (1), 62 (1)
point 92: t1 = 185 (3), 324 (10), 130 (3), 158 (2), 106 (4); t2 = 85 (3), 134 (2), 49 (3), 73 (1), 60 (1)
point 93: t1 = 164 (3), 352 (14), 126 (3), 145 (1), 126 (3); t2 = 82 (7), 126 (1), 62 (2), 73 (1), 61 (1)
point 94: t1 = 172 (3), 84 (3); t2 = 86 (5), 19 (0.5)
point 95: t1 = 148 (5), 120 (2), 148 (2), 124 (3); t2 = 54 (8), 66 (7), 72 (1), 61 (1)

table 4 contains the calculated periods. it confirms that the measured values of the horizontal direction change for all reference points in the same way; this conclusion is also confirmed by the graphs in figs. 4, 5 and 6. it is impossible to detect a periodic phenomenon in the measured data, because no specific period repeats more than once. so we must assume that the measured data of the horizontal direction do not contain any periodic changes that could be inferred from these time intervals. the next step of the analysis of the horizontal direction is the comparison of the development of the horizontal direction and the air temperature. the easiest way to compare these two variables is to draw their approximation curves in one graph. this procedure leads to the graph in fig. 7, where no clear dependency between the measured values of the horizontal direction and the air temperature is evident. still, it can be assumed that all or most of the relevant variables affecting the measured value of the horizontal direction are also in some way dependent on the temperature (refraction, the curvature of the glass in the windows of the shelter, etc.). this comparison was made only for measurements to reference point no. 93, because the measured values of the horizontal direction to the other reference points are considerably similar.
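the graphical comparison just described – two fitted curves on one time axis with independent scales, as in figs. 7, 10, 12 and 14 – can be sketched as follows; matplotlib and the function names are assumed choices, not the tools used by the author.

```python
import matplotlib.pyplot as plt

def plot_against_temperature(days, fitted_value, fitted_temperature, value_label):
    # draw the two fitted l-h curves on a shared time axis with independent y scales
    fig, ax1 = plt.subplots()
    ax1.plot(days, fitted_value, color="tab:blue")
    ax1.set_xlabel("day of the time series")
    ax1.set_ylabel(value_label)
    ax2 = ax1.twinx()
    ax2.plot(days, fitted_temperature, color="tab:red")
    ax2.set_ylabel("air temperature [°c]")
    fig.tight_layout()
    plt.show()
```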
table 5: results of the tests of statistical hypotheses for the horizontal direction – f and t statistics and the number n of daily averages, for time intervals 1-5 (the critical values are fα ≈ 4 and tα/2 ≈ 2 at the significance level α = 5 %)
point 91: f = 2010, 47, 6257, 5332, 633; t = 45, 6.9, −79, −73, −25; n = 250, 410, 261, 311, 157
point 92: f = 1598, 182, 3657, 4962, 570; t = 40, 14, −60, −70, −24; n = 249, 411, 260, 313, 172
point 93: f = 3133, 433, 4638, 3727, 788; t = 56, 21, −68, −61, −28; n = 253, 410, 261, 314, 177
point 94: f = 2131, 1; t = 46, 1; n = 250, 127
point 95: f = 548, 5625, 4604, 742; t = 23, −75, −68, −27; n = 203, 261, 315, 165

the conclusion of the analysis of the horizontal direction is clear: the measured values of the horizontal direction include many random and systematic errors, which cause a large variance of the measured values. these errors are most probably caused by a combination of the effects of temperature, insolation and the construction of the protective shelter. the total station measures through ordinary glass set into the metal walls of the protective shelter, and due to the influence of insolation and temperature the windows of the shelter change their optical behaviour. the results of the approximations by the l-h function show that the measured data contain a linear trend of a considerable size, on average 1 mgon/month. the cause of this trend probably lies in the measurement technology, as the instrument measures only in the first face of the telescope. no clear conclusion can be drawn about the periods obtained by this analysis, because no period was found which occurs repeatedly in different time intervals. the comparison of the temperature and the values of the horizontal direction shows that the values of the horizontal direction hardly correspond to the temperature at all, so there is no clear dependency between these variables.

figure 7: comparison of approximation curves of horizontal direction and temperature, ref. point no. 93.

the analysis of the horizontal direction also brought a positive conclusion: the changes in the data measured to the different reference points are similar, and therefore the horizontal angles should be stripped of their influence.

analysis of the variations of the horizontal angle

as mentioned in the previous part, the calculation of horizontal angles from the measured values of the horizontal direction should remove most of the adverse errors in the measured values, so the variance of the calculated values of the horizontal angle should not reach the values observed for the horizontal directions. another advantage is the possibility to calculate the approximation from all the measurements from january 2006 to july 2010; it is therefore not necessary to divide the measurements into intervals as in the case of the horizontal direction. the analysis was performed as an approximation of the daily averages of the horizontal angles by the l-h function with one harmonic member (1). similarly to the analysis of the horizontal direction, the calculated linear coefficients were tested by the tests of statistical hypotheses as to whether they are equal to zero. the comparison of the values of the horizontal angles with the values of the air temperature was made at the end of the analysis. measurements to reference point no. 94 were not used in the analysis, because this point was measured only for a short time.
the angles ω93−91, ω93−92, ω92−91, ω95−93, ω95−92 and ω95−91 (all combinations) were calculated from the measurements to the other reference points. the convention of marking horizontal angles in geodesy was deliberately violated here: the first index marks the right arm of the angle. the schematic situation of the total station and the reference points is shown in fig. 2. a graphic presentation of the results of the approximation can be seen in figures 8 and 9, where it is well seen that the calculation of the horizontal angle removes the undesirable influences and that the progress of the values is periodic. the variance of the values is significantly smaller than the variance of the horizontal direction; its size is from 0.5 to 1 mgon for the daily averages. it is also apparent from the figures that the data contain a small linear trend.

table 6: significant coefficients of the l-h function calculated in the approximation of the horizontal angles (standard deviations in brackets)
ω92−91: t1 = 373.7 days (1.5), c = 0.32 mgon (0.01), b = 5.7e-07 gon/day (1e-08), i.e. 0.205 mgon/year (0.005)
ω93−91: t1 = 362.6 days (0.8), c = 0.63 mgon (0.01), b = −3.3e-07 gon/day (1e-08), i.e. −0.120 mgon/year (0.005)
ω93−92: t1 = 366.2 days (0.5), c = 1.01 mgon (0.01), b = −9.0e-07 gon/day (2e-08), i.e. −0.326 mgon/year (0.006)
ω95−91: t1 = 359.8 days (3.0), c = 0.15 mgon (0.01), b = −2.4e-07 gon/day (1e-08), i.e. −0.086 mgon/year (0.005)
ω95−92: t1 = 381.7 days (1.5), c = 0.50 mgon (0.01), b = −1.2e-06 gon/day (2e-08), i.e. −0.435 mgon/year (0.007)
ω95−93: t1 = 369.8 days (1.0), c = 0.43 mgon (0.01), b = −6.0e-07 gon/day (1e-08), i.e. −0.218 mgon/year (0.005)

the results in table 6 show that the values of the horizontal angle oscillate approximately with an annual period and have an average amplitude of 0.5 mgon. it can be assumed that this periodic progress with an annual period is caused by the periodic changes of the air temperature during the year; the comparison of the temperature and the values of the horizontal angle is described below. the periodic progress of the values of the horizontal angle can be considered proven. the size of the calculated linear coefficients is on average 0.2 mgon/year, and the angle ω92−91, unlike the others, has a positive sign. the tests of statistical hypotheses were made in the same way as in the analysis of the horizontal direction, i.e. the hypothesis of a zero slope of the regression line was tested. the null hypothesis is rejected for all angles and thus a linear trend was proven. the numeric results of the tests of statistical hypotheses are listed in table 7.

figure 8: approximation curve of the horizontal angle between ref. points no. 93 and 91.
figure 9: approximation curve of the horizontal angle between ref. points no. 92 and 91.

table 7: results of the tests of statistical hypotheses of the linear trend of the horizontal angles (critical values fα ≈ 4, tα/2 ≈ 2; n is the number of daily averages)
ω92−91: f = 301047, t = 549, n = 1575
ω93−91: f = 24002, t = −352, n = 1577
ω93−92: f = 110396, t = −332, n = 1595
ω95−91: f = 309314, t = −556, n = 1100
ω95−92: f = 258677, t = −509, n = 1119
ω95−93: f = 81487, t = −285, n = 1126
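a minimal sketch of the angle computation on which this section relies, assuming a table of daily mean directions in gon with one column per reference point (the layout and names are assumptions):

```python
def horizontal_angle(directions, right_point, left_point):
    # omega(right, left) = direction to the right arm minus direction to the left arm;
    # errors common to both directions (orientation drift, shelter refraction) cancel out
    return (directions[right_point] - directions[left_point]) % 400.0  # full circle = 400 gon

# e.g. the angle omega_93-91 analysed in figs. 8 and 10:
# omega_9391 = horizontal_angle(daily_directions, "93", "91")
```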
despite the rejection of the null hypothesis, the linear trend can be neglected on the basis of its size and the accuracy of measurement required for this particular situation. furthermore, a comparison of the calculated horizontal angles and their approximation curves with the temperature curve was made in the same way as in the previous chapter. if the approximation curve of the air temperature is drawn in one figure with the approximation curve of the horizontal angle, a graph suitable for comparing these two variables is created. this graph is in fig. 10, which compares the approximation curve of the air temperature and the horizontal angle ω93−91. it is clear from the figure that the size of the horizontal angle changes periodically depending on the air temperature; specifically, the angle ω93−91 increases with increasing temperature and decreases with decreasing temperature. a very similar progress can be seen in the graphs comparing the other horizontal angles with the air temperature.

the analysis of the horizontal angles brought interesting results. a periodic progress of the values of the horizontal angle with an annual period and an amplitude of 0.5 mgon was proved. the obtained linear trend is so small that it can be neglected. it is seen from all the graphs of the values of the horizontal angle that the calculation of the horizontal angles reduces almost all the systematic errors that were observed in the values of the horizontal direction; only random errors stay in the data and cause the variance of the values of the horizontal angle visible in the graphs. furthermore, a comparison of the horizontal angles and the air temperature was made graphically. this comparison shows that the annual progress of the horizontal angle corresponds to the annual progress of the temperature. it can be concluded that the periodic changes of the horizontal angle detected by the analysis are most likely caused by the influence of the temperature changes during the year.

figure 10: comparison of approximation curves of horizontal angle and temperature, ref. points no. 93 and 91.

analysis of the variations of the measured values of the zenith angle

the next tested measured value is the zenith angle. this analysis was made in the same way as for the horizontal angle and was performed for all the measurements over the entire period of time in which the data are available. thanks to the forced centering of the total station, the position of the instrument does not change and the height of the instrument varies very little. if we consider a value of 0.01 m as the extreme value of the height change of the instrument, then the measured zenith angle to the nearest reference point no. 95 (1157 m) will change by about 0.55 mgon; from the perspective of the long-term analysis and the variance of the measured values this is negligible. only the measured data to reference point no. 92 were not included in the analysis for the first 11 months of measuring, because there is a big jump in the measured values for some unknown reason. the analysis was performed in the same way as the analysis of the horizontal angle. equation (1) was used for the approximation again, because only one main period is expected in the measured data, and the daily averages of the measured values of the zenith angle were used in the calculation of the coefficients of the l-h function. the resulting linear coefficients were tested by the tests of statistical hypotheses with the null hypothesis that the linear trend is null. the important coefficients of the l-h functions are listed in table 8, where the calculated values of the linear coefficient (linear trend), amplitude and period, including their standard deviations, are given for all reference points.

figure 11: approximation curve of zenith angle, ref. point no. 91.

an approximately annual period follows from the resulting values, except for reference point no. 95, where a period of about 2.5 years resulted. the annual period should again point to a dependency on the change of the air temperature. it is interesting that a decrease in the measured values at the end of the time period can be observed for all reference points, but the cause of this phenomenon is not clear. another important result is that the variance of the daily averages is 1 mgon and the calculated amplitudes are significantly smaller. the smallest variance of the daily averages has the zenith angle to reference point no. 93, which was expected: reference point no. 93 has the best stabilization in terms of spatial stability (a prism on a low metal bar) compared to the other reference points (prisms on lattice towers). anyway, the variance of the daily averages is small for all reference points, which shows that the largest fluctuations of the measured values are caused by changes in the weather conditions during the day. the values of the linear trend shown in table 8 are small. the results of the tests of statistical hypotheses are shown in table 9; the null hypothesis is rejected for all reference points, but the linear trend can be neglected on the basis of the accuracy of the measurements.

table 8: significant coefficients of the l-h function calculated in the approximation of the zenith angles (standard deviations in brackets)
point 91: b = 6.84e-07 gon/day (1e-09), i.e. 0.25 mgon/year (4e-04); c = 2.9e-04 gon (6e-07); t = 368.9 days (0.1)
point 92: b = 1.90e-07 gon/day (8e-10), i.e. 0.07 mgon/year (3e-04); c = 2.8e-04 gon (4e-07); t = 374.9 days (0.1)
point 93: b = 5.10e-07 gon/day (2e-09), i.e. 0.19 mgon/year (6e-04); c = 2.3e-04 gon (8e-07); t = 389.3 days (0.3)
point 94: b = 7.41e-07 gon/day (9e-09), i.e. 0.27 mgon/year (3e-03); c = 4.5e-04 gon (9e-07); t = 383.0 days (0.7)
point 95: b = −2.18e-07 gon/day (4e-09), i.e. −0.08 mgon/year (2e-03); c = 4.1e-04 gon (9e-07); t = 882.7 days (1.5)

table 9: results of the tests of statistical hypotheses of the linear trend of the zenith angle (critical values fα ≈ 4, tα/2 ≈ 2; n is the number of daily averages)
point 91: f = 354000, t = 595, n = 1579
point 92: f = 52549, t = 229, n = 1278
point 93: f = 109179, t = 330, n = 1606
point 94: f = 6556, t = 81, n = 386
point 95: f = 2474, t = −50, n = 1130

the measured values of the zenith angle to the reference points were also compared with the measured values of the air temperature in a similar way as for the horizontal directions and the horizontal angles. the comparison was made by graphically comparing the approximation curve of the temperature and the approximation curve of the zenith angle; the approximation curves were again calculated from the daily averages of the temperature and the zenith angle. the comparison of the air temperature and the zenith angle to reference point no. 93 can be seen in fig. 12. it is obvious from this graph that the approximation curve of the zenith angle is shifted to the right compared to the approximation curve of the temperature.

analysis of the variations of the measured values of the slope distance

the initial assumption was that this analysis would be performed in the same way as the analyses of the horizontal angle and the zenith angle, but a new approach had to be adopted. the measured data were divided into several time intervals, as in the analysis of the horizontal direction, because the measured distance has a significantly different size in each time interval; the differences of the slope distances between the time intervals are almost 3 cm. the measurements were divided into 7 time intervals (see table 10), some of which are the same as in the case of the horizontal direction.

figure 12: comparison of approximation curves of zenith angle and temperature, ref. point no. 93.

table 10: time intervals used in the analysis of the slope distance
interval 1: 1. 1. 2006 0:00 – 26. 9. 2006 0:00 (268.0 days)
interval 2: 29. 9. 2006 0:00 – 7. 2. 2007 10:00 (131.4 days)
interval 3: 7. 2. 2007 10:00 – 30. 11. 2007 10:00 (296.0 days)
interval 4: 28. 1. 2008 0:00 – 23. 10. 2008 0:00 (269.0 days)
interval 5: 27. 10. 2008 0:00 – 16. 9. 2009 12:00 (324.5 days)
interval 6: 21. 10. 2009 11:00 – 11. 3. 2010 9:00 (140.9 days)
interval 7: 11. 3. 2010 9:00 – 2. 5. 2010 0:00 (51.6 days)

the analysis of the slope distance was again performed as an approximation of the daily averages of the measured values by the l-h function with one periodic member (1). it was a big problem to establish the approximate unknowns for the calculation of the approximation in the seventh time interval, which is only 51 days long. the calculated linear coefficients of the l-h function were tested by the test of statistical hypotheses (3) in the same way as in the previous analyses. fig. 13 shows the daily averages of the slope distance for reference point no. 91 and their approximation curves.

figure 13: approximation curve of slope distance, ref. point no. 91.

the values of some coefficients of the l-h function are listed in table 11 for all reference points, and the results of the tests of statistical hypotheses are listed in table 12.

table 11: significant coefficients of the l-h function calculated in the approximation of the slope distance, for time intervals 1-7 (corresponding to table 10; σ in brackets)
point 91: b [mm/month] = 0.018 (0.0001), 0.332 (0.0001), 0.426 (0.002), −0.094 (0.0002), 0.413 (0.0001), 0.874 (0.0009), 0.934 (0.002); t [day] = 276.14 (0.04), 63.02 (0.01), 556.92 (0.44), 107.26 (0.07), 248.13 (0.08), 57.40 (0.02), 2.96 (0.00)
point 92: b [mm/month] = 95.241 (0.007), −2.658 (0.0005), −1.248 (0.0008), −1.170 (0.0005), −0.454 (0.001), 3.951 (0.0007), −0.137 (0.017); t [day] = 1802.45 (0.07), 52.01 (0.03), 333.35 (0.18), 96.08 (0.05), 295.82 (0.15), 9.72 (0.01), 9.40 (0.02)
point 93: b [mm/month] = −0.011 (0.0001), −0.216 (0.0004), −0.573 (0.0002), 0.136 (0.0001), 0.242 (0.0007), 0.141 (0.0006), 1.263 (0.0001); t [day] = 288.33 (0.06), 19.94 (0.03), 278.90 (0.11), 251.38 (0.06), 161.82 (0.41), 23.65 (0.01), 23.57 (0.002)
point 94: b [mm/month] = −0.055 (0.002), −0.057 (0.0002), −0.092 (1.2); t [day] = 193.72 (1.44), 36.36 (0.02), 36.58 (40.40)
point 95: b [mm/month] = −0.096 (0.0002), −0.071 (0.005), −0.094 (0.0001), 0.222 (0.0001), 0.674 (0.0001); t [day] = 275.80 (0.10), 216.71 (3.32), 98.61 (0.01), 6.78 (0.0002), 0.00 (0.01)

table 12: results of the tests of statistical hypotheses of the linear trend of the slope distance, for time intervals 1-7 (critical values fα ≈ 4, with 5 for the shortest interval of point 94, and tα/2 ≈ 2; n is the number of daily averages)
point 91: f = 8.0e+03, 7.6e+06, 5.9e+04, 1.5e+05, 1.2e+07, 9.6e+05, 2.5e+05; t = 261, 2761, 243, −389, 3485, 980, 495; n = 253, 117, 292, 264, 314, 109, 47
point 92: f = 1.8e+08, 3.2e+07, 2.6e+06, 4.8e+06, 2.1e+05, 3.1e+07, 6.3e+01; t = 13388, −5684, −1606, −2181, −454, 5564, −8; n = 252, 118, 292, 263, 316, 124, 47
point 93: f = 13151, 339437, 9.7e+06, 4.7e+06, 1.1e+05, 4.9e+04, 8.6e+08; t = −115, −583, −3121, 2171, 332, 221, 29319; n = 256, 117, 292, 264, 317, 129, 47
point 94: f = 973, 121910, 0; t = −31, −349, 0; n = 253, 117, 9
point 95: f = 3.7e+05, 3.2e+06, 2.2e+02, 4.8e+08, 4.6e+26; t = −610, −1783, −15, 21830, 2.1e+13; n = 206, 318, 264, 128, 36

the results of the analysis of the slope distance are not very clear. it is obvious from the figures that reference point no. 92 shows the biggest variance and an indication of periodic changes; this reference point probably changes its position the most. the other points have different calculated values of the amplitudes, periods and slopes of the regression lines when comparing the time intervals as well as the reference points. so we can assume that the measured values of the slope distance are affected both by measurement errors and by changes in the position of the reference points, which is most evident for point 92, while points 93 and 95 are the most stable. from the similarity of the approximation curves for different points we can also conclude that the measurement errors and the movements of the reference points are similar for all points; however, it is not easy to determine the ratio of these two errors in the resulting error. furthermore, it is necessary to consider that this analysis is affected by the long-term movements of the objects, because the analysis uses the daily averages of the measured values; the daily movements of the target due to sun exposure are therefore not visible in the figures. the values of the linear trend differ in size and direction. the calculated size of the linear trend is small and can be neglected, despite the fact that the linear trend was confirmed by the tests of statistical hypotheses. as for the other analyzed variables, the approximation curves were compared with the approximation curve of the temperature. fig. 14 shows this comparison for reference point no. 92; no dependency of the values of the slope distance on the temperature is obvious there, so we cannot declare that the measured values of the slope distance significantly depend on the temperature.

figure 14: comparison of approximation curves of slope distance and temperature, ref. point no. 92.

conclusion

this article describes the analysis of data measured by a total station. the analysis should reveal the development of the measured values during long-term continuous monitoring. the measured values of the horizontal direction, zenith angle and slope distance and the calculated values of the horizontal angles were tested in the analysis. the input data were obtained from the monitoring system situated in the čsa mine for the time period from january 2006 to july 2010. the result of the analysis of the horizontal direction is a linear trend found in the measured data with a size of approximately 1 mgon/month, which is probably caused by measuring in the first face of the telescope only. furthermore, it was found that the measurements of the horizontal direction are influenced by many systematic and random errors, which disproportionately increase the variance of the measured values. a larger variance is especially noticeable in the winter and the summer, when the gradient between the temperature inside and outside the shelter of the total station is the largest, which is caused by the air conditioning and heating in the shelter. finally, a comparison of the development of the measured values of the horizontal direction and the temperature was made, but no dependency between these two variables was proved.

a similar analysis was carried out for the horizontal angles on the basis of the results of the analysis of the horizontal direction. this analysis was performed for the entire time period (the separation of the measured values into time intervals was not used). most of the errors present in the horizontal direction were removed by this calculation. the analysis detected a negligible linear trend and revealed an annual period with an amplitude of 0.5 mgon. a comparison of the approximation curves of the horizontal angle and the temperature was also carried out; the curves have a very similar progress. it can be concluded that the main cause of the periodic changes in the values of the horizontal angle is the air temperature.

similar results were obtained in the analysis of the zenith angle. in this case a linear trend was detected, too; the calculated linear trend is negligible in terms of the precision of the measurements. annual periodic changes were detected in the measured data, with an amplitude of up to 0.5 mgon. the periodic changes in the values of the zenith angle approximately correspond to the changes of the temperature during the year, so the air temperature changes can be considered the main cause of the periodic changes of the measured values of the zenith angle.

the results of the analysis of the slope distance were influenced by the need to divide the measured values into 7 time intervals, because there are large differences in the measured values between the time intervals. splitting the data into so many intervals decreased the possibility of finding periodic changes in the data. the linear trend was proved in all time intervals, but its size and sign differ between the intervals. we can say that the data measured by the total station are influenced by many effects. these effects can probably be significantly reduced by installing automatically opened windows in the shelter or by removing the heating and air conditioning from the shelter. the changes in the measured data with an annual period are small and, in terms of the accuracy of the measurements, do not have greater importance.

acknowledgement

supported by grant sgs 2017 – optimization of acquisition and processing of 3d data for purpose of engineering surveying, geodesy in underground spaces and laser scanning.

references

[1] v. ballu et al. "a seafloor experiment to monitor vertical deformation at the lucky strike volcano, mid-atlantic ridge". in: journal of geodesy 83.2 (2009), pp. 147-159. doi: 10.1007/s00190-008-0248-3.
[2] t. beran et al. "measurement of deformations by mems arrays, verified at sub-millimetre level using robotic total stations". in: geoinformatics fce ctu 12 (2014), pp. 30-40. doi: 10.14311/gi.12.6.
[3] a. berberan, m. machado, and s. batista. "automatic multi total station monitoring of a tunnel". in: survey review 39.305 (2007), pp. 203-211. doi: 10.1179/003962607x165177.
[4] c. castagnetti et al. "multi-sensors integrated system for landslide monitoring: critical issues in system setup and data management". in: european journal of remote sensing 46 (2013), pp. 104-124. doi: 10.5721/eujrs20134607.
[5] y. g. he and c. j. zhao. large-scale bridge distortion measuring technique discussion. international conference on mechanics and civil engineering (icmce), wuhan, peoples republic of china. dec. 2014. doi: 10.2991/icmce-14.2014.120.
[6] b. klappstein, g. bonci, and w. maston. implementation of real time geotechnical monitoring at an open pit mountain coal mine in western canada. international multidisciplinary scientific symposium universitaria simpro 2014, petrosani, romania. oct. 2014.
[7] p. stanislav and j. blín. "technical support of the service of automatic total station leica tcr 2003a in operating conditions of the company mostecká uhelná a.s." in: acta montanistica slovaca 12, special issue 3/2007 (2007), pp. 554-558. issn 1335-1788.
[8] m. štroner et al. "prague castle area local stability determination assessment by the robust transformation method". in: acta geodynamica et geomaterialia 11.4 (2014), pp. 325-336. doi: 10.13168/agg.2014.0020.
[9] c. tse and j. luk. design and implementation of automatic deformation monitoring system for the construction of railway tunnel: a case study in west island line. joint international symposium on deformation monitoring, hong kong, china. feb. 2011.
[10] r. urban. surveying works during the deformation measurement of buildings. first printing. isbn 978-80-01-05786-5. ctu publishing house, prague, 2015, p. 227.

applications of the galileo system in civil engineering

prof. leoš mervart
department of advanced geodesy
faculty of civil engineering, ctu in prague
e-mail: mervart@fsv.cvut.cz

key words: galileo system, civil engineering, research plan msm 6840770032

research plan description

subject and goal of the research plan

the development of the first satellite positioning systems – also known as global positioning systems (gps) – goes back to the 1960s. their significance for technical practice started to increase together with the gradual completion of the navstar (navigation satellite timing and ranging) gps system during the 1980s. since then, the number of applications has been continuously growing, and gps has become indispensable in various areas of human activity. with the development of new gps applications, however, certain limitations of the presently single fully operational system, navstar gps, have started to be manifested, which are mainly caused by the fact that the system was originally designed for the needs of the us armed forces, and its inventors did not bear in mind the hundreds of its various civil applications.
for this reason, one of the priorities of the european union member countries was to develop its own satellite positioning system, which (unlike the united states’ system) would be primarily designed for a whole range of very diverse civil applications. during the complicated and long preparatory phase of the project, both its overall concept and the name of the positioning system itself went through changes. negotiations between the european union and the united states, aimed at ensuring so-called interoperability of the existing and the new positioning system, were also difficult. at the present time, it may be stated that there are no more political obstacles on the way to the construction of a european positioning system, and its detailed concept has been approved. in 2006, the project advanced from the so-called development phase to the implementation phase. the operating phase should be reached in 2008. the positioning system of the european union is based on two projects: egnos (euro geostationary navigation overlay service) project. this is a joint project of the european space agency (esa) and the european commission, which will (as based on a plan of 2005) comprise three geostationary satellites. galileo project. the galileo system (also representing a joint esa and european commission project) is a highly ambitious project, which, after its completion, should represent the latest technology for precise positioning. from july 1st 2003 to the end of 2005, the project was in the socalled development phase, while 2006 set off its so-called implementation phase, and from geinformatics fce ctu 2006 12 applications of the galileo system in civil engineering 2008 on the system should be fully functional. the system involves a total of 30 satellites on three orbits (inclination to the equator 56 degrees, distance from the earth 23616 km). the agreement on “the support, deployment and exploitation of satellite positioning systems galileo and navstar gps” signed between the usa and the european union in 2004 (on june 26th) has cleared the way for user interoperability and radiofrequency compatibility of both systems. the galileo system presently enjoys one of the top priorities in all eu member states. the czech government decree no. 218 of 23.2.2005 on organizing active participation of the czech republic in the galileo programme declares the readiness of the czech republic to support its own business and research subjects. the design of services and signals within the galileo system was subject to long discussions and numerous changes. at the present time, however, it is evident that one of the signals of the galileo systems will be in the l1 high-frequency range. this fact makes way for simple and cheap receivers capable of working with both systems – navstar gps as well as galileo. at the same time, this paves the way for considerable precision and mainly reliability improvement in positioning while using a combination of measurements coming from both navigation systems. a direct consequence of this qualitative leap forward to be expected is tremendous development of new applications, including applications in civil engineering. we presume that due to the above-mentioned facts, it is necessary to focus the efforts of research staff members of the faculty of civil engineering, ctu, on the research and development of the galileo system applications in the fields of geoinformatics, landscape and civil engineering. 
we are convinced that we possess a top-quality research team for launching such a project, composed of experts in the fields of satellite positioning systems, informatics, geodesy, highway engineering, building construction and other civil engineering branches. we are also convinced that the submitted project is appropriately targeted, well timed (in relation to the construction of the galileo system), and that its implementation will bring considerable economic benefits. the intended research plan consists of three mutually linked parts:

1. research and development of methods for processing signals from galileo satellites, problems of combining measurements from the galileo system with the existing navstar gps, specific features of galileo applications in the czech republic and linkage to the current observation grids of the czech republic.

2. research and development of efficient methods for processing the position-related information provided by the galileo system, its visualization, and the development of database and geographic information systems (gis) based on data obtained through the galileo system.

3. research and development of galileo applications in individual civil engineering branches:
– monitoring of deformations of bridge structures
– monitoring of displacements of building structures by combining galileo measurements with laser scanning methods
– building machinery control
– long-term monitoring of displacements of tram and railway tracks
– prevention of risks in transporting hazardous freight
– search for and development of new applications of satellite positioning systems in civil engineering
– search for and development of new applications of satellite positioning systems and remote sensing methods in water management.

one of the principal targets of the submitted research plan, therefore, is the comprehensive development of civil engineering applications of the galileo system. the term "civil engineering applications" denotes various technological procedures which involve, as a component part, obtaining position-related information on building structures, building machinery control, optimization of logistic problems during construction processes etc. on the one hand, these will include applications presently solved with other technologies (e.g. terrestrial measurements); in such cases the objective will be to develop galileo-based procedures which are more cost-efficient, more precise or less risky for the staff executing the measurements. on the other hand, the qualitative leap in the precision and reliability of positioning brought by the galileo system will pave the way for completely new applications and technologies which could not be realized with existing means. such applications cannot be predicted and enumerated one by one within the research plan proposal; nevertheless, we presume that it is precisely their search and development that should become a significant part of the proposed research plan. the submitted project, however, is not focused solely on the application level of the galileo system.
the project is submitted by staff members of the faculty of civil engineering, ctu, representing, to a large extent, surveyors and cartographers (geodesy and cartography being one of the branches studied at the faculty). therefore, another major objective of the project is the development of algorithms, methods and software applications for processing original measurements coming from the galileo system, and integration of the results obtained by means of this system into information systems. individual applications in civil and landscape engineering may be linked up only to the results of this development. the project, therefore, represents a certain synthesis of research in the branch of satellite surveying and geoinformatics with specific applications in civil and landscape engineering. present level of knowledge and research activity in sphere which is subject of the research plan, from both international and national standpoints the part dealing with the subject and goal of the research plan makes it clear that in order to achieve the project’s objectives knowledge coming from more branches of science and geinformatics fce ctu 2006 14 applications of the galileo system in civil engineering technology is necessary. this section gives a brief outline of the present level of research activity and knowledge in the fields essential for solving the research plan: satellite surveying and navigation satellite surveying, global positioning systems theory and development of algorithms and software tools for the processing of satellite observations is a kind of “starting point” of the submitted project as satellite observations represent the initial data source on which all intended applications are based. satellite surveying itself cannot exist without links to other specializations falling under the branch of so-called geodetic surveying. their common task is to allow positioning of static or moving objects within an exactly defined coordinate and time system. as any positioning is based on measurements carried out in a specific physical environment, study of physical characteristics of the earth and study of time-related changes in these characteristics are also component parts of geodetic surveying. the problems of satellite navigation have been a subject of interest of the staff members of the department of advanced geodesy of the faculty of civil engineering, ctu, since the start of the 1990s. the head of department, prof. mervart, is a co-author of two significant software systems for processing navstar gps observations – so-called bernese gps software (in cooperation with the astronomical institute, university of berne, switzerland) and the rtnet (real-time network) programme used by the japanese institute of geography for monitoring the japanese geonet network (a network of approximately 1200 permanent gps stations). prof. mervart and dr. lukeš are authors or co-authors of numerous scientific publications devoted to global positioning systems. other research team members – prof. kostelecký, dr. vondrák, and dr. pešek are experts in the field of the national reference frame, geodetic astronomy and coordinate and time reference systems. 
the problems of reference frames are an important part of research as individual positioning systems can work in variously defined and implemented reference frames whose correct transformation and subsequent conversion of results into systems used in the czech republic is the necessary condition for the use of satellite positioning systems for precise applications in technical practice. research activity in the field of satellite surveying is indispensable without wide international cooperation. the above-mentioned research team members are engaged in international cooperation projects in the framework of bilateral agreements with our foreign partners (mainly the astronomical institute, university of berne) and international scientific organizations – mainly international gnss service and international earth rotation and reference systems service (iers). among domestic research workplaces, our partner is mainly the research institute of geodesy, topography and cartography. informatics, geoinformatics, digital cartography and geographic information systems the second pillar of the intended research plan is a group of sciences which (if we want to sum them up by a one-word term) may be referred to as “geoinformatics”. this modern branch of science applies the knowledge of informatics – a science on processing and handling geinformatics fce ctu 2006 15 applications of the galileo system in civil engineering information – to fit the needs of geodesy, cartography and other scientific and technical disciplines dealing with measurements, mapping or (as is the case of civil engineering) also transformations of the earth’s surface. in our concept, geoinformatics is a very wide term which serves, to a certain extent, as an umbrella for cartography, photogrammetry, remote sensing, mapping and land register. the significance of geoinformatics is also testified by the fact that “geoinformatics” is the name and content of a newly accredited branch of study at the faculty of civil engineering, ctu. instruction in this branch will start in the 2006/2007 academic year. the department of mapping and cartography of the faculty of civil engineering, ctu, is a top workplace in the fields falling under geoinformatics. the department also comprises the laboratory of remote sensing, whose research is focused on several areas. one of them is the monitoring of time-related changes in landscape, which may be determined from the data obtained through remote sensing – satellite data (optical and radar data) and aerial data (aerial photogrammetric images). the process of image evaluation necessarily requires precise localization of monitored changes. the galileo system and the data provided by it will allow data acquisition with a substantially greater position precision and easier exploitation of the knowledge obtained in practice. the laboratory of remote sensing has been involved in the problems of differential interferometry for several years. this method allows collecting information on changes in the position of a territory on the earth’s surface. a supporting tool for evaluating the results processed by differential interferometry is e.g. gis where based on various sorts of input data a theoretical possibility of the existence of areas of subsidence is investigated. in this way, the values taken over from geological, mining and other background materials have been compared, which, however, cannot be used as absolutely reliable for confirming or denying subsidence. 
the galileo system will allow continuous position monitoring in selected localities for several years. these measurements will be compared with the results of interferometric evaluation. the precondition, therefore, is to use the galileo system for measurements on pre-selected localities. such data will be regularly evaluated, and the information entered into gis. another workplace of the department of mapping and cartography of fce, ctu, is the laboratory of photogrammetry. the laboratory activity in the last five years has been focused mainly on the use of terrestrial photogrammetry for documenting historical monuments where a number of significant achievements have been made while working on international projects as well. higher forms of digital photogrammetric evaluation systems using the principles of virtual reality are represented in the laboratory at four stations. the pilot project of the laboratory in the long run is work on the photopa system, which presently represents a relatively extensive photogrammetric and surveying database of minor historical monuments. collecting this kind of data involves geoinformation elements, and the european galileo positioning system is presumed to be used for locating the objects. engineering geodesy engineering geodesy is an application of geodetic methods in industry and civil engineering. among the principal tasks of engineering geodesy there is complete geodetic site management – from works carried out during the design phase of construction through site surveying to documentation of its as-built version and, in some cases, even long-term monitoring of its geinformatics fce ctu 2006 16 applications of the galileo system in civil engineering shifts and deformations. engineering geodesy is characterized by high demands for measurement precision and also by the fact that the measurements are carried out in very difficult conditions. the use of the latest devices is often the only way to fulfil the accuracy requirements observing, at the same time, the health and safety rules and reducing the risks of occupational accidents. in this respect, the use of satellite positioning systems in laying out major construction projects, the laser scanning method or the methods of automated building machinery control must be mentioned. the current positioning system navstar gps has already been successfully used in some of the applications mentioned above. the use of the galileo system, however, would result in reaching a greater precision of results replacing thus classic terrestrial methods in engineering geodetic applications setting high demands mainly for precise positioning. the impact of the new galileo system would be still more significant in the cases where measurements are carried out in unfavourable conditions (e.g. limited visibility due to existing development etc.). with the number of satellites being more than twice bigger (a total of 54 satellites in simultaneous application of both the navstar gps and the galileo systems as compared to only 24 navstar gps satellites) highly precise measurements under such difficult conditions would be feasible. the problems of on-site geodetic measurements are studied by the department of special geodesy of the faculty of civil engineering, ctu, and also by the department of geodesy and land consolidation, fce ctu. doc. blažek, the head of the latter department, is engaged in measurements of bridge deformations using optical methods. ing. štroner, phd. 
from the department of special geodesy, fce ctu, deals with the method of laser scanning of structures. doc. hampacher is an expert in the mathematical processing of geodetic measurements using the adjustment calculus.

building mechanics, highway engineering, environmental engineering, water management and water structures

the department of building mechanics of the faculty of civil engineering, ctu, has been, among other things, involved in the long-term monitoring of the static and dynamic behaviour of prominent building structures and the detection of their excessive static deformations and dynamic deflections. the current accuracy of satellite observations using navstar gps is, as a rule, lower than the precision required for such monitoring. it may, however, be expected that once higher accuracy has been achieved by introducing the galileo system, terrestrial measurements may in some cases be replaced with satellite observations, with considerable economies and enhanced health and safety. evaluation of the results would be greatly facilitated by the fact that repeated measurement campaigns could be replaced with permanent satellite receivers, thus eliminating the effect of periodic phenomena (e.g. temperature fluctuations during the day or year) on the determined behaviour of building structures. the department of railway structures, fce ctu, among other things, deals with monitoring displacements of selected sections of tram and railway tracks – e.g. trial sections with new structural elements. these displacements are presently monitored by terrestrial methods. the use of satellite methods would be greatly beneficial both in terms of cost-effectiveness and health and safety. the condition is not only high precision of the measurements, but also the capability of reaching this precision in conditions standard for railway and tram tracks – limited visibility of satellites in cuttings, impaired reception of signals due to vegetation etc. the department of sanitary and ecological engineering, fce ctu, is engaged in modelling distribution networks and rainfall-runoff processes in urbanized watersheds and in evaluating the ecological state of water courses and water-supply reservoirs. this requires precise information on the type and size of surfaces in the respective watershed, including their altitudes and boundaries, and the delimitation and precise location of structures, e.g. surface attributes of drinking water supply and sewage systems, which could easily be provided by satellite methods. the department of steel and timber structures, fce ctu, participates in tasks aimed at monitoring steel and timber structures showing increased risks of excessive deformations and shifts. such hazards are characteristic e.g. of historic timber spire roof structures. the majority of these structures suffer from gradual material degradation of the bearing structure, and in extreme climatic conditions (wind load) there are risks of failure and irreparable damage. that is why measurements with the proposed method and millimetre-level precision are invaluable here, and they may very well identify structural failures. the use of satellite methods for these structures is beneficial as these structures are usually taller than the surrounding development.
for this reason, any other measurement in such conditions sets high demands, while satellite receivers, on the contrary, provide considerable benefits as there is no danger of the receivers being shaded. the use of the proposed method would also significantly enhance measurements on high masts, towers, chimneys and other similar structures. in such structures, second limit state (serviceability limit state) is often relevant. here, dynamic wind effects are mostly in question, which must be monitored both in the actual wind direction and in the perpendicular direction, and cylindrical structures in particular may show resonance vibrations due to wind effects and be prone to fatigue damage. wind effects may be, in a very complicated way, monitored in wind tunnels (scale models are used) where actual wind effects are only simulated. measurements using satellite receivers would provide very valuable information on the basis of which actual wind effects could be analysed on real structures. the proposed project would significantly affect development in this area of structural design. the department of geotechnics is ready to participate in solving the research plan by using knowledge referring to foundation condition of structures (rock mass condition in underlaying subsoil of buildings) and their foundation structures, including geotechnical causes of structural failures. shape deformations of buildings may be also caused by rock mass dynamics, regardless of the fact if these are autonomous processes in the massif or part of interaction of the massif with buildings. reliable results can be obtained only by complex assessment. the department is well equipped by material and staff to monitor and evaluate the rock mass dynamics in interaction with buildings. cooperation with other participating researchers is expected to result in obtaining valuable geodetic and geotechnical background materials for the evaluation of various types of localities; the research team of the department of geotechnics will also provide engineering geological and geotechnical background materials for other planned works. geinformatics fce ctu 2006 18 applications of the galileo system in civil engineering sub-goals of the research plan 1. system analysis and setting principal goals (sa) the process is a follow-up to the results achieved by the researchers prior to the project’s launch. it runs at the start and, at the same time, during the whole time of the project duration to ensure that the partial tasks set are updated in relation to international development in the given problem area. a system solution to the project comprises a basic analysis of problems to solve, an application analysis and subsequent setting of demands for the functionality and properties of subsystems to ensure mutual linkage of individual processes. 2. research, development, testing and optimization of computation algorithms for processing information from galileo satellites (ga) the process is one of principal tasks of the proposed research plan. it is a follow-up to previous researchers’ works dealing with programme development for processing navstar gps measurements in real time. 
in this part of the research plan we intend to:
– develop a programme for processing so-called phase measurements of the galileo system,
– assess the effect of egnos corrections on the determined position of the receiver and design an optimum method of using the egnos system for the intended technical applications,
– investigate problems related to combining navstar gps and galileo measurements (different reference systems and time scales etc.) and prepare software tools for solving these problems (a small numerical sketch of such a combination follows at the end of this passage).

a new generation of satellite signal receivers is presently being prepared. these new receivers differ from the existing ones in that some hardware elements are replaced with firmware. the receivers will be fitted with efficient processors and will become, to a certain extent, universal: by modifying or changing the firmware, the receiver can be used for measurements with different global positioning systems or their combinations (navstar gps, galileo and potentially others). the receivers will possess their own operating system allowing user programmes to run directly in the receivers. this paves the way for new applications optimizing the processing of original raw data, handling communication with other devices within a given technology etc. research and development of such applications, which lie on the boundary between firmware and user applications, will also form part of our work.

3. integration of the national reference frame into the common european reference frame (pz)

research is targeted at the implementation of the european terrestrial reference frame etrf89 on the territory of the czech republic with centimetre precision and its linkage to the existing centimetre-precision horizontal and vertical coordinate control systems (s-jtsk, s-42/83, čsns). only by observing this standard will it be possible to provide localization in all coordinate reference frames with the desired accuracy. as satellite motion is primarily described by the laws of celestial mechanics in a quasi-inertial celestial system, it will also be necessary to refine the transformation relations between this celestial system and the rotating terrestrial reference system (precession, nutation, universal time, pole movement). some earth orientation parameters (precession, nutation, universal time), however, cannot be obtained from satellite observations alone, and a combination with vlbi observations is necessary; research will therefore also be aimed at developing the technique of these combinations. building the integrated terrestrial reference frame further requires refining information on the quasigeoid surface over the territory of the czech republic by processing heterogeneous data, verifying its precision by the "gps-levelling" method, developing an algorithm for the etrf89 – s-jtsk and etrf – s-42/83 transformations, and supplying the relevant software with regard to its incorporation into gis software.

4. applications of modern methods of numerical mathematics (nm)

study of effective and numerically stable methods for solving mathematical problems that occur in processing observations from satellite positioning systems and in processing huge geoinformation files. the research plan proposal does not make it possible to predict precisely which mathematical problems will arise during the project.
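the combination of navstar gps and galileo observations flagged in sub-goal 2 above involves, besides different reference frames, different system time scales. purely as an illustration (all numbers below are synthetic and the code is not the planned project software), the following python sketch runs a single-epoch pseudorange adjustment in which galileo measurements carry one additional unknown – an inter-system bias – next to the receiver coordinates and clock.

```python
# synthetic single-epoch gps + galileo pseudorange adjustment with an
# inter-system bias; all coordinates and delays below are invented.
import numpy as np

rng = np.random.default_rng(1)

sats = np.array([                      # satellite positions [m], earth-fixed
    [ 15600e3,   7540e3, 20140e3],
    [ 18760e3,   2750e3, 18610e3],
    [ 17610e3,  14630e3, 13480e3],
    [ 19170e3,    610e3, 18390e3],
    [-14600e3,  19000e3, 12000e3],
    [  6000e3, -20000e3, 16000e3],
])
system = np.array(["gps", "gps", "gps", "gps", "gal", "gal"])

x_true = np.array([3957000.0, 1130000.0, 4960000.0])   # receiver [m]
clk_true, isb_true = 8.5, 2.3                          # delays in metres

rho = np.linalg.norm(sats - x_true, axis=1)
obs = rho + clk_true + np.where(system == "gal", isb_true, 0.0)
obs += rng.normal(scale=0.5, size=obs.size)            # 0.5 m pseudorange noise

# gauss-newton on unknowns [x, y, z, receiver clock, inter-system bias]
est = np.array([3950000.0, 1100000.0, 4950000.0, 0.0, 0.0])
for _ in range(6):
    d = sats - est[:3]
    r = np.linalg.norm(d, axis=1)
    predicted = r + est[3] + np.where(system == "gal", est[4], 0.0)
    a = np.column_stack([-d / r[:, None],                   # position partials
                         np.ones(len(sats)),                # receiver clock
                         (system == "gal").astype(float)])  # inter-system bias
    dx, *_ = np.linalg.lstsq(a, obs - predicted, rcond=None)
    est += dx

print("position error [m]:", np.round(est[:3] - x_true, 2))
print("clock, isb estimates [m]:", round(est[3], 2), round(est[4], 2))
```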
generally we can expect problems stemming from the solution of large systems of linear equations (when processing observations from satellite positioning systems, the number of unknowns can be expected to be on the order of 100 thousand). what is demanded is not only high numerical stability, but also high solution speed. optimal algorithms can be achieved when a priori knowledge of the structure of the linear equation system is taken into account (e.g. in the case of sparse systems or systems with some strongly dependent parameters). when processing satellite data in real time, various modifications of the kalman filter are mostly used (the main reason being the effectiveness of the solution); in general, such filtering can be numerically highly unstable. optimization of filtering algorithms in terms of numerical stability is another goal of this project section.

5. monitoring of deformations of bridge structures (mk)

this area is considered one of the most promising applications of satellite observations in the building industry. the objective of the project is to verify the serviceability of the new european galileo positioning system for monitoring the static and dynamic behaviour of significant building structures and for detecting their excessive static deformations and dynamic deflections. in terms of serviceability, operating ability, long-term reliability and durability of large-span pre-stressed bridge structures, the highly topical issue of today is the time-related growth of permanent deformations. the social relevance of problems affecting the operating ability of bridges is immense, and the resulting costs greatly exceed those due to static failures. experience shows greater values of real deflections than predicted by calculations, and their long-term growth in time – actual long-term deflections are greater than those given by standard calculations. the causes of this phenomenon are many, and they need to be analysed objectively. one significant (though not the only) factor affecting deflection development is creep and differential shrinkage of concrete. because these are highly complex phenomena involving the interaction of a number of factors at different microstructure levels and affected by multiple variables, their mathematical description is necessarily rather complicated. the research team of the department of concrete bridge structures and bridges, fce ctu, has achieved results representing both original contributions to the theory of building structures and support for design practice and for monitoring the deformation development of large bridges. the achievements include new mathematical models and computation methodology, generalization of results and practical recommendations serving as an efficient tool for reliable and economical design of structures and for economies of construction materials, energy and resources, not only in construction but also in maintenance, repairs and reconstruction. information on the actual deformation time pattern of pre-stressed concrete bridge structures is essential for the calibration of theoretical predictions. it may be used both for assessing the condition of monitored bridge structures and for verifying mathematical models of creep prediction.
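a small python sketch (entirely synthetic numbers, not a claim about the project's actual processing chain) shows the kind of time-series decomposition such continuous satellite monitoring would allow: a slowly growing, creep-like permanent deflection is separated from an annual thermal cycle by a simple least-squares fit, and the multi-year increment of the permanent part is read off for comparison with predicted values. the motorway example that follows gives the real-world scale of such increments.

```python
# synthetic illustration only: separating a deflection series into a
# creep-like permanent trend and an annual thermal cycle.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 261)                 # five years, weekly positions

# invented "truth": logarithmic creep growth plus an annual cycle
perm_true = 0.030 * np.log1p(t / 0.8) / np.log1p(5.0 / 0.8)   # 3 cm in 5 years
seasonal = 0.005 * np.sin(2 * np.pi * t + 0.4)                # +-5 mm thermal
series = perm_true + seasonal + rng.normal(scale=0.003, size=t.size)

# least-squares fit: offset, log-term, annual sine and cosine
a = np.column_stack([np.ones_like(t),
                     np.log1p(t / 0.8),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(a, series, rcond=None)

increment_5y = coef[1] * np.log1p(5.0 / 0.8)   # permanent deflection after 5 years
print("estimated permanent increment over 5 years: %.1f mm" % (1e3 * increment_5y))
```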
example: deflection development is monitored on the bridge on the d8 motorway over the ohře river near the village of doksany. an increment of permanent deflection of about 3 cm was measured over five years at the midpoint of the longest span of this bridge, which is 130 metres long. such large deformations invite their measurement by the methods of satellite geodesy, for which, compared to standard methods, a precision at least one order higher, but also continuous monitoring during the entire long-term measurement period could be expected. load due to temperature variations is a significant part of the total stress acting on large building structures. information on deformations of a structure caused by temperature changes, obtained with the galileo system and supplemented by measurement of the temperature variations which caused them, may serve to refine the knowledge used in background materials, design models and techniques for calculating and judging the effect of temperature variations on the reliability of the investigated structures. example: today, temperature variations of several bridges in the czech republic are systematically observed in order to verify the values specified in the assumed european standard en 1991-1-5. the department of building mechanics, fce ctu, is observing temperature variations of a pre-stressed concrete bridge over the sedlický brook on the d1 motorway, in the middle cross-section of its longest span, which is 75 m long. the deflection of this bridge structure due to temperature variations reaches ca 1 cm. it is evident from the above-mentioned facts that the galileo system could be used for:
– monitoring of the increment of permanent deflections of important pre-stressed concrete bridges,
– monitoring of the growing compliance (capacity to yield) of important building structures caused by gradual degradation of their bearing structure,
– real-time monitoring of changes of the basic natural frequencies of important building structures induced by gradual degradation of their bearing structure,
– detection of abnormal static deformations of monitored structures caused by extreme static live load (e.g. snow),
– detection of excessive vibrations of monitored structures caused by extreme dynamic live load (e.g. swinging of a monitored structure by extreme wind, swinging of a footbridge by vandals) or by loss of aerodynamic stability.

the first phase of research on the project presumes comparison of the data obtained by the galileo system with the data measured by classic techniques (research workplace: department of geodesy and land consolidation, fce ctu). a transition from classic measurements to satellite observations, with the advantages of economic effectiveness, improved health and safety and greater reliability of results, should be possible after the verification of results. part of the task is also the development of methods for the mathematical-statistical processing of the obtained data and the development of software support for the computations.

6.
monitoring of failures and excessive deformations of significant historical buildings, monitoring of deviations and deformations of tall masts, towers and chimneys the aim of this part of the project is to verify the serviceability of the new european galileo positioning system for the monitoring of static and dynamics behaviour of significant (mainly historical) building structures and for the detection of their excessive deformation and dynamic deflections, which usually become a signal of a failure in the bearing structure. the risks of failures or damage are of principal importance e.g. in historically valuable timber structures of spire roofs. in the majority of timber structures, progressive degradation of the material of the main structure occurs (effects of humidity, wood-destroying insects etc.), and during a severe climatic exposure (for spire roofs mainly under wind load) there is a risk of a failure. therefore, application of satellite receivers with millimetre precision would be very beneficial to identify very well the defect of the structure and also give answers to questions if rehabilitation (strengthening) of the bearing structure is necessary. the use of satellite methods is very useful in the case of above mentioned structures because these structures usually exceed in height the surrounding terrain and buildings. any other measurements is in this case are very complicated and expensive, while application of the satellite methods is very favourable, because there is no risk of overshadow of receivers, there is no need for a permanent operator, the measurement can be restricted to only specific periods of time in which the structure is exposed to extreme effects. satellite observations represent one of a few possibilities how to verify the behaviour of tall masts, towers and chimneys and other similar structures exposed to wind effects. for these structures, the limit state of serviceability (so-called second limit state) is usually crucial. here, dynamic effects of wind are mostly the cause which must be monitored both in the direction of acting wind and in the perpendicular direction where mainly cylindrical-shaped structures could be vibrated by resonance and consequently endangered by fatigue damage. wind effects are very difficult to monitor on scale models in wind tunnels where real wind impacts are simulated. measurements with the help of galileo receivers would provide us unique information on the basis of which we could analyze the real wind impacts on aerial structures. with respect to the proportions of these structures, their height-width ratio is very unfavourable for verification in wind tunnels, and monitoring of above mentioned variables on the real structures will entail significant advances in this branch. geinformatics fce ctu 2006 22 applications of the galileo system in civil engineering a further contribution would be a possibility to make measurements on such structures, which bear a considerable number of aerial systems and where there is a tendency to increase their dimensions or quantity. this problem is increasingly common because it is an accompanying phenomenon of the development of mobile communication networks. there is a limited number of tall buildings and structures in built-up areas and usually there is no chance to build new tall structures there, and this results in an increase in effective aerial areas on existing structures. there are some limits to this development and risks of failures under extreme climatic load. 
that is why, with the help of satellite receivers, the present behaviour and also the behaviour after increasing the effective aerial area could be monitored on selected structures. in the case of significant deformation growth, it will be possible to decide whether the technology threatens the safety and stability of the structure, and possibly how large an additional horizontal load is permissible in terms of maintaining the reliability of the whole system.

7. application of the galileo system in landscape and water management engineering (vi)

the research workplaces (department of irrigation, drainage and landscape engineering, fce ctu, and department of sanitary and ecological engineering, fce ctu) are going to develop expert and warning systems serving as decision-making support in real time. they will refer, in particular, to the following fields:
– water management – monitoring of critical situations such as accidents, floods etc.,
– landscape engineering – mapping and monitoring of valuable landscape elements, nature and landscape conservation,
– transport and its environmental impact.

exact measurements with the help of the galileo system will be used to improve horizontal and vertical data on surface attributes and for the exact positioning of remote sensing data. possible applications include:
– exact location of surface attributes of drinking water supply and sewage systems in x, y, z coordinates, enabling rapid access to control elements and manholes for operation purposes and rapid response to failures (digitising the topographic base does not convey the height coordinate, geodetic measurements are demanding and expensive, and surface attributes are often hard to access, especially due to dense traffic),
– data for rainfall-runoff models of water quantity and quality (type and size of surfaces in the watershed and their slopes),
– data for assessing the ecological state of streams, rivers and water-supply reservoirs (their surveying, monitoring of changes in channels, riparian zones and watershed cover, water-level fluctuation),
– monitoring of accidents affecting water resources and of the progress of eutrophication of reservoirs,
– monitoring of the subsidence of urban utility networks resulting from surface subsidence due to mining, etc.

both research workplaces plan to provide theoretical outputs – feasibility studies, designs of applications, assessment of the system's technological demands and, in cooperation with other working groups, definition of the technical requirements necessary for the system – as well as outputs in the form of demonstrations of individual applications.

8. application of the galileo satellite system in highway engineering (si)

exact location in space and time is becoming an important part of road transport and highway engineering. several applications can be used e.g. in management and heavy vehicle control, easier overload checks, automatic highway toll systems etc. one of the most important challenges in terms of road safety is the precise location of overloaded, oversized and dangerous freight during transport. the galileo satellite system can also be used to advantage for highway repairs and maintenance. this includes e.g. optimization of winter snow removal as well as routine monitoring, pavement management systems and highway maintenance, including measurements of their flexible and unchangeable variables.
the research team at the department of road structures, fce ctu, will proceed with the development of an expert system which would comprehensively cover the above-mentioned issues, using the galileo system as the principal source of spatial data. special attention will be paid to ongoing programmes in canada and south-eastern asia where gps is already used. the system proposed by the fce will, however, employ the latest technologies of the new positioning systems, including communication between the user/vehicle and the central operating system after a crisis situation has been detected, and reaction to this situation.

9. applications of the galileo system in railway engineering (žs)

the research workplace (department of railway structures, fce ctu) will be engaged in developing applications of the galileo system for the long-term monitoring of spatial displacements of selected sections of tram and railway tracks. in this case, the galileo system will replace technologies based on terrestrial measurements with a technology based on satellite observations, resulting in greater cost-effectiveness and improved health and safety. monitoring of track displacements is necessary where tracks are built with new structural elements, on high-speed tracks, and on tracks endangered e.g. by slope movements resulting from human activity (i.e. technogenic movements, e.g. due to undermining etc.). the objective of automated monitoring is to ensure greater safety of rail-borne vehicle operation.

10. combination of the laser scanning method and the galileo system (ls)

the research workplace (department of special geodesy, fce ctu) deals with modern methods of monitoring and documenting constructions by laser scanning systems. this is a highly effective and modern method which allows mass collection of spatial data and its post-processing and visualization. the most important device of the whole system – the laser scanner – produces coordinates in a local coordinate system, and so the results of the measurement have to be transformed into the national reference coordinate system or into another well-defined suitable coordinate system. to perform the transformation accurately and effectively, it appears suitable to develop a technology which combines the laser scanning measurement with observation of control points using the methods of the european positioning system galileo (a minimal sketch of such a control-point transformation follows this list). the proposed task comprises:
– research into the possibilities of mass data collection with laser scanning systems for construction documentation purposes,
– research into the possibilities of mass data collection with laser scanning systems for the purposes of deformation analysis of constructions and other structures of interest,
– processing, visualization and presentation of data acquired by laser scanning systems,
– research into the possibilities of interconnecting the results of terrestrial laser scanning with the results of galileo observations,
– comparison and linking of terrestrial geodetic technologies and modern satellite positioning methods with a view to the european galileo system,
– research into the possibilities of data acquisition for the purposes of facility management,
– usage of 3d models in city information systems (mis).
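as a small illustration of the control-point transformation mentioned in sub-goal 10 above (a sketch with invented coordinates, not the intended production software), the following python fragment estimates a rigid-body transformation between the scanner's local frame and a geodetic frame from a handful of targets coordinated in both frames, and then re-frames further scanned points. in practice a similarity transformation or a full 3d network adjustment with proper weighting would be used; the svd-based fit below only shows the principle.

```python
# rigid-body (6-parameter) fit of scanner-frame control points onto
# geodetic-frame coordinates; all values below are invented.
import numpy as np

def fit_rigid(local_pts, global_pts):
    """least-squares rotation + translation mapping local -> global (kabsch/svd)."""
    cl, cg = local_pts.mean(axis=0), global_pts.mean(axis=0)
    h = (local_pts - cl).T @ (global_pts - cg)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cg - r @ cl
    return r, t

# four control targets: scanner frame vs. geodetic frame (synthetic values)
local_ctrl = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                       [0.0, 8.0, 0.0], [10.0, 8.0, 2.0]])
ang = np.radians(25.0)
r_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([742310.0, 1043650.0, 215.0])
noise = np.random.default_rng(2).normal(scale=0.005, size=(4, 3))   # 5 mm
global_ctrl = local_ctrl @ r_true.T + t_true + noise

r, t = fit_rigid(local_ctrl, global_ctrl)
cloud_local = np.array([[2.5, 3.1, 1.2], [7.8, 6.4, 0.3]])   # two scanned points
cloud_global = cloud_local @ r.T + t
print(np.round(cloud_global, 3))
```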
11. application of the galileo system in experimental geotechnics (eg)

the state of the rock or soil environment and its possible impact on the structure or on existing objects is evaluated during geotechnical surveys. a frequent task is to detect discontinuities, weakened zones, voids, cavities and other underground inhomogeneities. all this is provided by georadar surveys, ideally in combination with other methods (shallow seismic refraction, micro-gravimetry). the use of georadar yields invaluable information about the geological structure near the construction route, e.g. about the depth and relief of the underlying rock, lithological changes, occurrence of disturbance zones etc., and it enables the localization of utility networks and underground objects. based on the results of georadar surveys it is possible to optimise the construction route, classify earthwork, determine the most suitable technology, determine the extent of blasting work, and evaluate the stability of the area or the possible impact of the construction on its surroundings. georadar is useful both during construction and after its completion (e.g. in assessing compaction quality). its indisputable advantage is the high density of measured data: while the distance between survey drill holes is often several hundred metres, the spacing when using georadar is only several decimetres. using georadar in connection with the galileo system makes it possible to obtain exactly localised survey outputs through linkage to the coordinate system. by combining a non-destructive method of geotechnical surveying with the galileo system, it will be possible to create an automated, continuous 3d database of the underlying rock over extensive areas.

12. evaluation of strain risks in structure–rock massif interaction (gt)

research in this area encompasses the investigation of historical buildings (castles – kunětická hora), landfills, spoil heaps (palivový kombinát ústí nad labem – rabenov), dumps, undermined localities and underground structures (research underground laboratory, gallery josef, mokrsko), slope deformations (čertovka – ústí nad labem – vaňov), traffic structures (highways), hydrotechnical structures (rockfill dams etc.), and industrial and engineering structures (bridges) exposed to natural and anthropogenic deformations of the rock massif. following an archive study, a selection of localities sensitive in terms of existing movements will be prepared (slope deformations, erosion, flood-related transport and accumulation, sensitive subsoil types, human activities). after shortlisting the structures whose investigation is feasible given the available time and personnel, the structures of interest will be instrumented and monitored. the results will be related to measurements carried out on the rock massif, or in tunnels and drill holes. finally, conclusions will be formulated evaluating the potential use of analogous measurements and the monitoring of structures.

13. web services design for geoinformatics (ws)

the research is targeted at the design of a general object-oriented system for geoinformatics, with the objective of integrating various heterogeneous geoinformation services under a unified user interface. the interface for each service will be described in xml based on the wsdl language.
a registered user will be able to access results of submitted tasks, the geoinformation system itself will be written in java language and based on servlets and the database handler jdbc. privileged users will be allowed to register their own services in the system. 14. referencing of state largeand medium-scale map series by using the galileo system (ma) the use of observations from the satellite positioning system must go hand in hand with entering the results in the existing topographic base. on the territory of the czech republic there is a number of state map series produced in a total of four coordinate systems and three cartographic projections with an unusually diverse range of sheet marking and sheet line systems. to meet the needs of digital cartography, international exchange of cartographic data and generally all modern technical applications, seamless maps (i.e. a map as a whole not a set of map sheets) must be introduced in a projection and a coordinate system chosen by the user. problems arise e.g. due to the fact that base maps are produced by scanning and the contacts of neighbouring sheets are not aligned and, besides, they have irregular shrinkage. the research workplace (department of mapping and cartography, fce ctu) will work on the development of tools for the conversion of the existing topographic base into a uniform (electronic) format so that it may be used jointly with the galileo satellite positioning system. 15. application of the galileo system for enhanced effectiveness of real estate register management (kn) technical applications of the positioning system in civil engineering rely on flawless functioning of the new real estate register information system (iskn) and on rapid provision of current data for its graphic section (sgi). in this respect, the problems to be solved are creation of fast outputs in the form of thematic maps for crisis management at the time of natural disasters or similar events. localization of selected elements within the territory by means of the galileo system can be conveniently combined with image data (colour orthophotos) to specify and monitor land use by means of the iacs system, based on the eu requirements and regulations for providing subsidies for agricultural production. 16. application of the galileo system in photogrammetry (fg) the laboratory of photogrammetry deals with a number of tasks related to precise positioning. the laboratory activity has, in the long run, been focused on documentation and presentation of historical monuments with the aim of creating an extensive virtual database of listed monuments on the internet. the first successful attempts in this area have been achieved http://lfgm.fsv.cvut.cz, the prototype of a database of minor historical monuments photopa is in functional condition. this database, in particular, should be complemented by options of making animations and virtual reconstruction of monuments, and it should be made accessible to general public in the form of a simplified database for virtual viewing geinformatics fce ctu 2006 26 http://lfgm.fsv.cvut.cz applications of the galileo system in civil engineering and tourism. because of storing extensive amounts of data in the gis environment which must be localized, a definition point or points must be specified for each object. up to now, the technique used for this activity was measuring of position from a detailed map or hiking gps. another presumed system level will be visualization of objects based on their position on the map. 
here, the galileo system can be applied in connection with a digital map. a similar approach may be used for the long-term investigation of geoglyphs and petroglyphs in peru, which has been running for several years in cooperation with htw dresden and inc peru; there, the methods of precise gps must be used for documentation purposes in extreme desert and mountainous conditions. another research activity presumes the creation of a system for multi-purpose navigation through cultural monuments, in which the digital topographic base can be used complemented by additional useful information. in the last several years, laser scanning has become a new technology for documentation and mass collection of 3d points from the surroundings. at the same time, dynamic methods have appeared applying the gps system complemented by an inertial positioning system. systems fitted with inertial units belong among hi-tech devices and are used in aerial applications; terrestrial dynamic systems are still in the development stage. the research objective in this case will be the creation of a dynamic system for surveying terrestrial linear features, mounted on a car or rail-borne vehicle. the system's core will be a galileo receiver and a laser measuring head. the presumed result of the measurement will be a 3d model of the nearest surroundings of the passed section, with multi-purpose use.

information management systems for cultural heritage and conservation of world heritage sites. the silk roads case study

ona vileikis1, mario santana quintero1, koen van balen1, barbara dumont2, vincent tigny2
1 raymond lemaire international centre for conservation (rlicc), katholieke universiteit leuven (k.u.leuven), kasteelpark arenberg 1 b 2431, 3001 heverlee, belgium. ona.vileikis@asro.kuleuven.be
2 gim nv, researchpark haasrode 1505, interleuvenlaan 5, b 3001 heverlee, belgium.

keywords: documentation, world heritage, information management systems, serial nomination.

abstract: this paper discusses the application of information management systems (ims) in cultural heritage. ims offer a set of tools for understanding, inventorying and documenting national, regional and world heritage properties. information management systems can assist state parties, stakeholders and heritage site managers involved in cultural heritage management and conservation by easily mining, sharing and exchanging information from multiple sources based on international standards. moreover, they aim to record, manage, visualize, analyze and disseminate heritage information. in close collaboration with five central asian countries, namely turkmenistan, kazakhstan, kyrgyzstan, uzbekistan and tajikistan, a belgian consortium headed by the raymond lemaire international centre for conservation (rlicc), k.u.leuven is developing the silk roads cultural heritage resource information system (chris). this web-based information management system supports the preparation of the central asia silk roads serial and transnational nominations on the unesco world heritage list. the project has been set up thanks to the financial support of the belgian federal science policy office (belspo) and in collaboration with unesco world heritage centre in conjunction with the people's republic of china and the japanese funds-in-trust unesco project.
it provides a holistic approach for the recording, documentation, protection and monitoring tasks as part of the management of these potential world heritage properties. the silk roads chris is easily accessible to the general user, presented in a bilingual english and russian frame, and interoperable, i.e. open for other applications to connect to. in this way, all information for the nomination dossiers is easily verified regarding consistency and quality and is ready for management, periodic reporting and monitoring processes with respect to the listed property. finally, this study provides a general framework to establish the effectiveness and limits of the use of information systems for serial transnational nominations of world heritage properties and to demonstrate the potential of an improved heritage documentation system.

1. introduction

serial transnational world heritage nominations are an opportunity to reach a more balanced world heritage list, supporting underrepresented regions and fostering international research. however, these nominations, as well as the risks affecting the integrity of the properties, are becoming an increasingly complex challenge for the authorities managing heritage places at different scales. therefore, the use of information management systems (ims) is an opportunity to elaborate an accurate and reliable nomination dossier and allow exchange among the state parties involved. world heritage properties are confronted with a number of threats due to e.g. tourism, development, deterioration processes and climate change. adequate monitoring of management measures should be proposed and should include tracking physical changes of the monuments, controlling visitors' carrying capacity, and clearly defining the boundaries and buffer zones, among other activities. several charters and international conventions, such as the world heritage convention, the venice charter, the burra charter and many others, aim to guide site managers and protect these sites. however, when looking at serial transnational nominations, the issues mentioned before become more complex and require better coordination and more collaboration among state parties. therefore, the different stakeholders involved in world heritage nominations need to find a way to improve collaboration and better understand their heritage places by easily mining, sharing and exchanging information from multiple sources using common standards. following unesco's recommendations [7] on the use of digital technology, the application of ims in cultural heritage is a cost-effective tool to easily understand, store, share, manage and monitor serial transnational world heritage nominations. an ims is able to support these kinds of nominations in the conservation and monitoring processes by defining standards, storing large data volumes and acting as a collaborative platform for the state parties to share information and take informed decisions. the case study of the silk roads cultural heritage resource information management system (chris) will illustrate the implementation of ims in the elaboration of world heritage nomination dossiers.

1.1. serial transnational world heritage nominations

a major purpose of the world heritage convention is to foster international cooperation for the protection and management of outstanding cultural and natural heritage sites.
the concept of serial nominations is an innovative approach supporting this aim. serial nominations are submitted as single properties with one outstanding universal value, and their nomination, monitoring and management must be treated as such [1]. within this category and looking forward to a more representative, balanced and credible world heritage list, serial transnational nominations are an opportunity to encourage collaboration and exchange between state parties. however, these nominations come together with more complex challenges specifically in the systematization and management of the information. unesco world heritage centre and icomos (international council of monuments and sites) have encouraged the state parties to identify and nominate new and underrepresented categories of properties, such as cultural routes, settlements and cultural landscapes. some successful serial transnational inscriptions in cultural heritage are the frontiers of the roman empire, and the belfries of belgium and france. however, only two cultural routes, namely the camino real de san agustin and the route of santiago de compostela, are listed while no transnational route has yet been inscribed [2]. therefore, current initiatives such as the silk roads serial transnational nominations are of great interest to the world heritage community. on the one hand, serial nominations create the opportunity for single heritage sites otherwise not on the unesco list, to be proposed and protected under a larger framework. in the case of serial transnational nominations, the component parts are protected by a management system at an international level. these types of nominations not only link cultures, regions and communities but are also an instrument for sustainable development and tourism at transnational, national and regional scale. moreover, these nominations bring diversity in models and management strategies for the conservation of the sites [3,4]. on the other hand, these large nominations entail significantly higher efforts than a single nomination and are more complex when it comes to operability and legal issues. first, the preparation of serial transnational world heritage nomination dossiers requires more resources, guidance and active joint collaboration. some problems and issues on the elaboration of these nomination dossiers identified and discussed during international meetings [5,6] were that: (1) large volume of baseline information is required; (2) there is no specific system to support the elaboration of complex nomination dossiers; (3) overlapping and duplicate data may be collected especially at a transnational level due to different documentation procedures, geographic references, standards in conservation strategies and national policies; (4) there is a need for exchange and joint use of data; (5) there is a need for decisions based on critical thinking and the latter should not be limited to decisions by authorities without sufficient access to the appropriate expertise; (6) there is a lack of technical expertise e.g. gis, documentation and recording techniques. moreover, in transnational nominations each state party presents differences regarding social, political and legal characteristics making the coordination of management systems more complex. some solutions may vary from bilateral agreements to more elaborated management partnerships. a joint management system should be proposed with a shared vision for the protection of the ouv. 
however, although the whole property is covered under one umbrella, the world heritage convention and other international policies, each component part should comprise a set of objectives to later measure the state of conservation and changes over time [3,5].

1.2 the role of information management systems in cultural heritage

adequate design and implementation of information management systems enable a cyclical process supporting cultural heritage activities such as documentation, inventorying, management strategies, monitoring, and reporting. as the system needs to be regularly updated, various stakeholders and the local community can be involved. although ims are an opportunity in cultural heritage, they should be adapted depending on the national legislation, regional policies, and the local needs. as required by the world heritage convention, a digital repository is essential for the inclusion of the sites on the list. moreover, in serial properties with such a large volume of information and number of stakeholders, heritage managers need a better understanding of the properties in order to take informed decisions. ims aim to overcome the problems and issues of serial transnational nominations by (1) creating a common platform able to receive an unlimited amount of information with secure and restricted access through the use of user profiles and access rights management, (2) allowing an analysis and comparison of the heritage places as a basis for e.g. prioritizing development interventions or planning, redefinition of uses of the properties, or their interpretation, and (3) presenting consistent and reliable information. nevertheless, most systems fail because of a lack of data or because the end-users are not involved during the development and interpretation process. often a bottom-up approach is not considered. ims need to be adapted to the current legal circumstances and to the way data repositories are maintained in each country, in order to easily input the information into the system. moreover, heritage managers should have prompt access to the information, as they are the ones bridging the decision makers and the local community in order to protect the integrity and authenticity of the sites. the appropriate use of ims in cultural heritage will have a positive effect on the inventorying and decision-making of large heritage places. their functions and tools allow different levels of engagement and open the opportunity to shift from a top-down to a bottom-up approach. however, each system should be tailor-made according to its purposes and user requirements.

2. the silk roads and its nomination

nowadays the silk roads nomination, with around 35,000 km of major routes of dialogue and exchange connecting east and west [7], involves 12 state parties and comprises more than 160 heritage sites already listed on the tentative list. as agreed during the second meeting of the coordinating committee on the serial world heritage nomination of the silk roads in ashgabat, turkmenistan, 50 potential cultural corridors were identified by an icomos thematic study as a potential collection of serial and serial transnational properties, altogether defining the overarching framework of the silk roads nomination.
this new nomination strategy aims to foster inter-state cooperation, allowing state parties to act in smaller "groups" more independently and to propose more manageable properties to be nominated to the world heritage list. however, despite this new approach, the elaboration and documentation of these serial transnational nomination dossiers in central asia still poses a number of problems and issues: (1) the large extent of the considered territory; (2) some of the inventory systems, e.g. the monument passport as developed and used in soviet times, are standardized but are only available in hard copies and not much information is digitized; (3) decision-makers are hardly acquainted with digital technology; (4) there is a lack of technical expertise in documentation and recording as well as monitoring of cultural heritage in all central asian countries except for kazakhstan [8,9]; (5) transnational collaboration is clear on paper but not in practice; (6) in some countries there is a lack of integrated management plans or systems and of conservation strategies [6,9]. in dealing with these issues, the use of digital technology is a recommended cost-effective tool for better understanding and protection of serial transnational world heritage properties [10]. information management systems (ims) easily allow storing, managing and monitoring the baseline information during the elaboration of the nomination dossiers as well as after the properties are listed. moreover, ims ensure data integrity and improve quality control. within this framework, the unesco world heritage centre has allocated resources for underrepresented categories of properties and geographical regions to be technically supported during the nomination procedures in order to both ensure the balance and credibility of the world heritage list and avoid the resubmission of nominations with incomplete files [11]. the silk roads serial transnational world heritage nomination initiative is one of them. for the first phase of the silk roads serial transnational world heritage nomination, two priority cultural corridors in central asia and one in china were selected. to carry out this task, two projects will be working in conjunction: the japanese funds-in-trust unesco project dealing with the support for documentation standards and procedures of the silk roads serial and transnational world heritage nomination in central asia, and the silk roads cultural heritage resources information system (chris) project. the silk roads chris project is a unesco whc initiative funded by the belgian federal science policy office (belspo) and executed by a belgian consortium headed by the raymond lemaire international centre for conservation at the k.u.leuven, supporting the silk roads serial transnational world heritage nomination of the five central asian countries, namely the republics of kazakhstan, kyrgyzstan, tajikistan, turkmenistan and uzbekistan. it aims to overcome the nomination issues based on the central asian user requirements by: (1) improving and automating the documentation processes, (2) cross-referencing data at national and transnational level with international standards, (3) developing and implementing an effective web-based licence-free multilingual platform for information recording and sharing, (4) developing participatory editing tools with protected access rights, and (5) focusing on a thin-client approach for which no additional plug-ins are needed.
moreover, this user-friendly system will not require long training or specific qualifications for its use. the first demonstrator will be presented at the first documentation training in uzbekistan, to be evaluated by the central asian users in september 2011.

3. the silk roads chris approach

the silk roads chris is a web-based collaborative platform in development that is user-oriented and based on the operational guidelines of the world heritage convention [1] and the approach of the unesco chair in preventive conservation, maintenance and monitoring of monuments and sites (precomos) [12]. the system is accessible through a standard internet web browser, allowing the implementation of a large number of actions and tools. it is based on "gim geo cms", which is a combination of a content management system with a webgis application, and will be an evolution of the heritech ims developed by gim for the heritage managers of the city of biograd na moru in croatia thanks to the support of the flemish government. this system shares the concepts of the calakmul 4dgis [13], developed in collaboration with unesco and with the financial support of belspo, but is based on leading-edge technology. the silk roads chris uses the geographical location of features of interest as the common denominator for all the information stored in the system. this means that the user can navigate the territory and interactively select (on the map or via a query) a heritage feature, represented as a point or as a detailed polygon, in order to get the linked information (textual description, associated pictures, documents, 3d models, videos, etc.) illustrating the ouv of the property. users with the required privileges can also edit and modify the information online. tailored maps can also be generated by overlaying a selection of available layers and then exported in the usual formats to illustrate the nomination dossier. finally, the temporal dimension is also integrated and provides additional monitoring and analysis capabilities. all this information comes together with metadata compliant with international standards such as iso or dublin core. the information structure will be based on the current inventory system of the central asian monument passport and the variety of typologies identified in the region. in addition, the information is structured in such a way that users, as a function of the groups to which they belong, will only have access to a set of counters providing a set of relevant data layers and tools. each counter focuses on a specific purpose and the system offers full flexibility regarding the number of counters and the functionality available within each of them. initially the silk roads chris will provide a bilingual english-russian integrated framework.
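to make this structure concrete, the following is a minimal sketch of how one heritage feature with its linked documentation and dublin-core-style metadata might be represented; the field names, values and access groups are illustrative assumptions only, not the actual silk roads chris data model.

import json

# illustrative record: a component part stored as a point (a polygon would work
# the same way), with linked media, simple dublin-core-like metadata and the
# groups allowed to see or edit it. all names here are hypothetical.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [66.975, 39.627]},
    "properties": {
        "name": "example component part",
        "corridor": "example corridor",
        "linked_documents": [
            {"kind": "photo", "url": "https://example.org/photo_001.jpg"},
            {"kind": "3d_model", "url": "https://example.org/model_001.obj"},
        ],
        "metadata": {
            "dc:title": "example component part",
            "dc:creator": "national documentation team",
            "dc:date": "2011-06-01",
            "dc:language": ["en", "ru"],
        },
        "access_groups": ["state_party_editors", "whc_reviewers"],
    },
}

print(json.dumps(feature, indent=2))

keeping the geometry, the linked documents and the metadata in one record mirrors the idea of the geographical location acting as the common denominator for everything stored about a feature.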
the current methodology proposed by precomos is the application of preventive conservation to cultural heritage [14]. preventive conservation aims to avoid or mitigate damage, understand the risks, and help to dilute responsibility. it also promotes maintenance as a preservation strategy based on proper monitoring. figure 1 depicts the four phases of this process, which correspond respectively to (1) the search for significant or baseline information, (2) the identification of missing information, (3) the choice of measures and answers to be taken, and (4) the control of the efficiency of the actions or mitigations proposed. this approach includes systematic monitoring and maintenance supporting the conservation planning by looking back over and over again.

figure 1: scheme of the icomos charter - principles for the analysis, conservation and structural restoration of architectural heritage [14].

the concept previously introduced is integrated in the three major components of every information system, namely data input, data processing and data output, in order to better monitor and manage the property after it has been listed. figure 2 illustrates this result as it is proposed for the silk roads chris approach. these three components are explained below. first, the data input is the result of the analysis correlated with the different levels of detail and spatial scales. in the case of the silk roads, the analysis includes the baseline information requested for the nomination by the operational guidelines, e.g. geographical coordinates, historical background, risks and threats [1], the user requirements of the five central asian state parties and the outcome of the state parties' interaction in the system as a transnational group. after having populated the system with the data required for the nomination dossier, the state parties can interact, e.g. they can define and adapt the buffer zone of the suggested corridor, and add or delete proposed component parts together with the associated information. all this information is visualized on top of background satellite images together with interpretations of processed images in gis as geographical baseline documentation. however, if no satellite images are available, other image sources such as large-scale orthophotos will be used. moreover, the system will focus on three spatial scales for the geospatial information to be included: (1) the silk roads (scale 1:400,000 / 1:600,000), (2) the corridors (scale 1:100,000 / 1:200,000) and (3) the component parts (scale 1:10,000 / 1:20,000), while offering unlimited zooming capabilities. furthermore, for the elaboration of nomination dossiers the baseline information can be obtained at a reconnaissance scale of heritage documentation. this scale allows a rapid assessment of the heritage place and its setting, giving an overview and understanding of the characteristics of the property and its component parts, and identifying its significant elements, risks and management issues [15]. however, in order to effectively monitor and manage the property, more detailed levels of documentation should be provided. second, the data processing includes both (2) the diagnosis, meaning reviewing the information included in the system and reporting what kind of information is missing, and (3) the therapy or mitigation of these issues, e.g. deciding what the next step should be, either going back to look for more information or moving forward to the output data. finally, the data output is the result when the information is ready to be used for the nomination dossier and the diagnosis is positive. here, (4) control or monitoring procedures will be applied after the property is listed to assist the periodic reporting.

figure 2: approach of the silk roads chris

4. outcomes

the silk roads chris, although still in its early stages, is an opportunity for the central asian state parties to virtually exchange and work together on the serial world heritage nominations, overcoming physical boundaries.
this flexible information management system will be able to hold texts, images and 3d models, moving from paper-based inventories to automated processes and ensuring consistency of information and the same structure for the submission to the world heritage list as well as for the evaluation by the advisory bodies, the state of conservation (soc) reports and the periodic reporting. moreover, working together with the world heritage centre, the possibility of dynamically linking the whc database storing the data of the tentative list with the silk roads chris through the use of the open archives initiative protocol for metadata harvesting (oai-pmh) is being explored. however, some of the issues to overcome will be the different geographic projections and how to control the quality and relevance of the information uploaded to the system.

5. conclusion

serial transnational world heritage nominations are complex initiatives that attract international attention as they contribute to the core objective of the convention by promoting international collaboration. their efficient documentation, management and monitoring are a vast challenge for the state parties and stakeholders involved. the large volume of information involved requires better and more effective inventorying and sharing systems, with ims in digital technology being optimal tools to collect and process this data. the silk roads cultural heritage resource information system (chris) is being developed as a flexible collaborative platform supporting the interaction of several state parties aiming to elaborate nomination dossiers. with minor adaptations it could also be used for other serial transnational world heritage nominations of this kind. even if the use of the system does not guarantee the inclusion of the property on the world heritage list, it is undoubtedly an asset as it demonstrates management capabilities. its main aim is to assist the preparation of the nomination dossier and the later monitoring of properties, and the managing and updating of this information, by offering a common virtual space for the state parties involved to cooperate and work together and by developing standardized baseline data. in a second phase, the silk roads chris will develop heritage tools for the monitoring and management of the properties after the property is listed. as the ims is a flexible and unlimited tool, new applications such as the use of a timeline and the reporting and evaluation processes should be further explored.

figure 3: silk roads chris information management system. powered by gim. http://www.silkroad-infosystem.org [copyright: silk roads chris project]

6. references

[1] unesco, operational guidelines for the implementation of the world heritage convention. un doc whc.08/01, january 2008.
[2] unesco world heritage centre, world heritage list. [online]. available: http://whc.unesco.org/en/list. [accessed: 11-mar-2011].
[3] iucn, engels b., koch p., and badman t., serial natural world heritage properties: an initial analysis of the serial natural properties on the world heritage list. iucn.
[4] b. engels, serial natural heritage sites: a model to enhance diversity of world heritage?, in world heritage and cultural diversity, vol. 4, cottbus: german commission for unesco, 2010, pp. 79-84.
[5] swiss federal office of culture, o. martin, and s.
gendre, eds., unesco world heritage: serial properties and nominations. swiss federal office of culture, 2010.
[6] magin c., world heritage thematic study for central asia: a regional overview. 2005.
[7] jing f., unesco's efforts in identifying the world heritage significance of the silk road, in proceedings of the icomos 15th general assembly and scientific symposium, xi'an, 2005, vol. 2, pp. 934-944.
[8] fodde e., conserving sites on the central asian silk roads: the case of otrar tobe, kazakhstan, conservation and management of archaeological sites, vol. 8, no. 2, p. 77, 2006.
[9] fodde e., conservation and conflict in the central asian silk roads. journal of architectural conservation, vol. 16, no. 1, p. 75, 2010.
[10] unesco, guidelines for the preparation of serial nominations to the world heritage list. n.d.
[11] cleere h., denyer s., and petzet m., the world heritage list: filling the gaps, an action plan for the future. paris: icomos, 2005.
[12] rlicc, precomos - preventive conservation, maintenance and monitoring of monuments and sites [online]. available: http://precomos.org/index.php/member/login/. [accessed: 07-jun-2011].
[13] van ruymbeke m., tigny v., de badts e., garcia-moreno r., and billen r., development and use of a 4d gis to support the conservation of the calakmul site (mexico, world heritage programme), in proceedings of the 14th international conference on virtual systems and multimedia, limassol, cyprus, 2008.
[14] icomos, icomos charter - principles for the analysis, conservation and structural restoration of architectural heritage. icomos, 2003.
[15] letellier r., ed., recording, documentation, and information management for the conservation of heritage places. the getty conservation institute. los angeles: j. paul getty trust, 2007.

quality parameters of digital aerial survey and airborne laser scanning covering the entire area of the czech republic

jiří šíma, novorossijská 18, praha 10, czech republic, jirka.sima@quick.cz

abstract

the paper illustrates the development of digital aerial survey and digital elevation models covering the entire area of the czech republic at the beginning of the 21st century. it also presents some results of the systematic investigation of their quality parameters reached by the author in cooperation with the department of geomatics at the faculty of applied sciences of the university of west bohemia in pilsen and the land survey office.

keywords: digital aerial survey, orthophoto imagery, aerial laser scanning, digital elevation model, digital surface model, czech republic

1. introduction

the year 2010 became a turning point as far as the gathering of up-to-date geospatial data from the whole territory of the czech republic is concerned. aerial survey with digital cameras in rgb and nir spectral bands with an on-the-ground resolution of 0.20 m covered 100 % of the state territory in the course of 2010-12. periodically repeated digital image records serve as a database for photogrammetric processing, preferentially the production of digital colour orthoimagery of the czech republic with an on-the-ground resolution of 0.25 m. the era of taking aerial photographs on film and their subsequent rastering by means of precise photogrammetric scanners was de facto closed for this purpose. at the same time, aerial laser scanning of 68.4 % of the state territory was successfully realized within the project of new hypsometry of the czech republic [1].
its main goal is to provide bodies of state administration with high resolution elevation data in the form of a digital elevation model and a digital surface model of the entire state territory by 2015. the new models of hypsometry will fulfil all requirements of the inspire project as well. the czech office for surveying, mapping and cadastre, the ministry of defence and the ministry of agriculture are the investors of both projects. the land survey office and the military geographical and hydrometeorological office are the main compilers of the digital image and lidar data. the author and the department of geomatics at the faculty of applied sciences of the university of west bohemia in pilsen have collaborated in the definition and evaluation of the quality parameters of all resulting products from the very beginning of the development of all above mentioned projects.

2. development of aerial survey for production of the cz orthophoto

in order to reach complete coverage of the czech republic with aerial photographs or digital images in a 3-year period, the total area of 78 865 km2 was divided into 3 zones (west, central, east) in 2003, respecting the layout of the state map series at scale 1 : 5000 (fig. 1). from 2003 to 2008, wide-angle aerial photographs, mostly at scale 1 : 23 000 from a relative flight height of 3500 m, were exposed on colour negative film and transformed into raster form by precise photogrammetric scanners. such parameters allowed the production of digital colour orthophoto imagery with a pixel size of 0.50 m on the ground. since 2009, orthophoto imagery with a higher resolution (0.25 m on the ground) has been required, and other survey flight parameters, a photo scale of 1 : 18 000 and a lower flight height of 2740 m, had to be chosen.

figure 1: schedule of aerial survey and airborne laser scanning of the czech republic

since 2010, aerial photography on film has been replaced by digital aerial survey using digital large-format metric cameras for the simultaneous registration of images in panchromatic, r, g, b and nir spectral bands. the minimum sensor element size of 6 micrometres made it possible to reach economically an on-the-ground pixel size between 20 and 25 cm. all private firms taking part in tenders for digital aerial survey have used vexcel ultracam x or xp cameras. table 1 shows a complete review of the methods used for the aerial survey of the entire area of the czech republic [4]. digital aerial survey covered the central zone in 2010 and the zone-west in 2011. the rest of the state territory (22.8 %, almost 28 thousand km2 in the zone-east) was imaged in 2012 together with one half of the central zone, because there is an intention to reduce the recent 3-year period to a 2-year period from 2012 (fig. 2). all projects of aerial surveys have been funded by the czech office for surveying, mapping and cadastre and the ministry of agriculture. the main product of the periodical aerial surveys, the cz orthophoto (ortofoto čr in czech), serves first of all the needs of state authorities and public administration. some important examples of its application should be introduced here:
• updating of the land parcel information system; the orthophoto documents the areas recently cultivated by farmers when they ask for financial support from the funds of the european union,
• updating of the fundamental base of geographic data (known under the acronym zabaged®), which is a topological-vectorial spatial database at the level of the 1 : 10 000 base map of the czech republic,
• updating of the military digital landscape model 25, a similar database at the level of the 1 : 25 000 military topographic map,
• providing the infrastructure for spatial information in europe (inspire) with recent orthophoto imagery of the czech republic,
• as a substantial part of the digital map of public administration.

year      | method of aerial survey and rastering                              | average scale of photographs / images | on-the-ground pixel size (pixel of the cz orthophoto)
2003-2008 | analogue, on colour film + scanning into raster form               | 1 : 23 000                            | 0.46 - 0.48 m (0.50 m)
2009      | analogue, on colour film + scanning into raster form               | 1 : 18 000                            | 0.27 m (0.25 m)
2010      | digital (pan, r, g, b, nir), direct raster registering - uc xp     | 1 : 32 000                            | 0.19 m (0.25 m)
2011      | digital (pan, r, g, b, nir), direct raster registering - uc x, xp  | 1 : 35 000                            | 0.21 - 0.25 m (0.25 m)
2012      | digital (pan, r, g, b, nir), direct raster registering - uc xp     | 1 : 36 000                            | 0.22 m (0.25 m)

table 1: recent development of aerial survey parameters for the cz orthophoto production

figure 2: digital aerial survey of the czech republic in a two-year interval since 2012

3. quality parameters of the cz orthophoto

the quality of the cz orthophoto enables this product to be used for some other tasks [6]:
• discovering discrepancies and gross and systematic positional errors in the planimetric representation of objects in digitized cadastral maps (see fig. 3),
• colour infrared orthophotos are widely used for the national inventory of forests.

since 2004 the laboratory of digital photogrammetry at the faculty of applied sciences in pilsen, and later the author in collaboration with the department of land surveying of the land survey office, have been continuously assessing the absolute positional accuracy of the cz orthophoto [4], using several test fields containing hundreds of check points measured mostly by the gps real-time kinematic method with a positional accuracy better than 10 cm and well identified as identical points in the orthophoto. remarkable is the high absolute positional accuracy of the orthophoto made from digital images in spite of their distinctly smaller scale [5][8][9] (see table 2).

method of image data registering                                | number of points | cy [m] | cx [m] | my [m] | mx [m] | mxy [m] | ∆ymax [m] | ∆xmax [m]
colour film 1 : 16 650 (2008), raster scanning, pixel size 20 µm | 732             | -0.17  | 0.08   | 0.36   | 0.33   | 0.35    | 1.88      | 1.67
digital images 1 : 32 000 (2010), pixel size 6 or 7.2 µm         | 430             | 0.05   | -0.03  | 0.14   | 0.16   | 0.15    | 0.55      | 0.60
digital images 1 : 35 000 (2011), pixel size 6 µm                | 301             | 0.02   | 0.07   | 0.21   | 0.24   | 0.23    | 0.63      | 0.89
digital images 1 : 36 000 (2012), pixel size 6 µm                | 90              | 0.04   | -0.05  | 0.20   | 0.23   | 0.22    | 0.48      | 0.53
explanations: c ... systematic error, m ... root mean square error, ∆max ... maximum error

table 2: results of testing the absolute positional accuracy of the cz orthophoto
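the statistics of table 2 can be computed directly from the coordinate differences between points identified in the orthophoto and the same points measured by gps-rtk. the following is a minimal sketch assuming the per-point differences are already available as numpy arrays; the sample numbers are made up for illustration, and the mxy expression shown is one common convention, not necessarily the exact formula used by the authors.

import numpy as np

def accuracy_stats(diffs):
    """systematic error c, root mean square error m and maximum absolute error
    for one coordinate component (orthophoto minus check point)."""
    c = diffs.mean()                      # systematic error
    m = np.sqrt(np.mean(diffs ** 2))      # root mean square error
    d_max = np.abs(diffs).max()           # maximum error
    return c, m, d_max

# illustrative differences only [m], not the values measured in the paper
dy = np.array([0.12, -0.05, 0.20, -0.14, 0.08])
dx = np.array([-0.03, 0.10, -0.22, 0.05, 0.01])

cy, my, dy_max = accuracy_stats(dy)
cx, mx, dx_max = accuracy_stats(dx)
mxy = np.sqrt((my ** 2 + mx ** 2) / 2)    # mean positional rmse (one common convention)
print(f"cy={cy:.2f} cx={cx:.2f} my={my:.2f} mx={mx:.2f} mxy={mxy:.2f}")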
the following check point types have been chosen for the evaluation of the absolute positional accuracy of the cz orthophoto:
• visible on-the-ground corners of a building,
• foot of a pylon, telegraph pole or lamppost,
• on-the-ground corner of a fence, underpinning or masonry wall,
• midpoint of a circular manhole, drainage or well cover,
• corner of a kerb or road surface,
• x or t form of white line intersections on a road surface.

figure 3: original cadastral survey of a local road 50 years ago and the reality as shown in the cz orthophoto

the high absolute positional accuracy of the cz orthophoto made from digital aerial images in 2010-12 (see tab. 3) has been reached thanks to adherence to three principles:
1. targeting of optimally distributed ground control points within the block of aerial images, predominantly monumented triangulation points with a known and accurate absolute position defined in the compound coordinate reference system (s-jtsk, balt po vyrovnání); their number should not be less than 1 per 25 digital images, e.g. 40 gcps within a block of 1000 images,
2. on-board registration of the elements of exterior orientation by a gps/imu apparatus, which makes the spatial geometry of a block stronger,
3. modern and effective software for digital automatic aerotriangulation (aat) that enables quick changes of the input parameters and repetition of the computation (match-at version 5.2.1 or 5.4.2 has been used in two processing centres).

year | aat blocks | rmse of residuals on ground control points used for aat: mx / my / mh [m] | aat blocks | rmse of residuals on check points not used for aat: mx / my / mh [m]
2010 | 17         | 0.113 / 0.100 / 0.200                                                      | 12         | 0.160 / 0.140 / 0.300
2011 | 17         | 0.089 / 0.080 / 0.216                                                      | 11         | 0.111 / 0.104 / 0.268
2012 | 23         | 0.067 / 0.074 / 0.137                                                      | 22         | 0.184 / 0.173 / 0.256

table 3: results of testing the accuracy of digital automatic aerotriangulation

4. development of digital elevation models in the czech republic

until the year 2000 the czech republic was completely covered with hypsography based on graphical contour lines with a two-metre interval represented in the base map of the czech republic at scale 1 : 10 000 (fig. 4). in the course of establishing the fundamental base of geographic data (zabaged®) in topological-vectorial form, the above mentioned hypsography was digitized into a 3d model forming the triangulated irregular network called
as the most effective method the airborne laser scanning (als) was chosen for this purpose [1] [7]. geoinformatics fce ctu 10, 2013 20 šíma, j.: quality parameters of digital aerial survey . . . figure 5: laser scanning system litemapper on the board of l-410 fg aircraft in contrast with gathering the aerial photos and digital images by czech or foreign private firms only, the airborne laser scanning has been accomplished by three state administration authorities: ministry of defence, ministry of agriculture and the czech office for surveying, mapping and cadastre. their tasks have been distributed according to disponible capacities. army of the czech republic dispones with a special photogrammetric aircraft l-410 fg of the czech production suitable for both airborne laser scanning and digital aerial survey as well. usual speed of a survey flight or airborne laser scanning is 250 km per hour. the flight hights range from 1200 m to 3200 m. in case of airborne laser scanning there is a laser scanning system litemapper hired out by german firm igi installed on the board of l-410 fg (fig.5). parts of that system are: laser scanner riegl lms q680, gps novatel equipment and imu produced by the firm igi. fig. 6 illustrates state of airborne laser scanning at the end of 2011. in 2012 the flights with l-410 fg had to be frozen through the necessity of its general overhaul. individual blocks for laser scanning and data processing occupy the area from 10 x 10 km up to 10 x 40 km depending on maximum difference of elevations inside. the flight lines are parallel to e-cordinate axis of utm projection used by the army of the czech republic. there are two utm 6-degree zones in the czech republic but a unique national coordinate reference system (s-jtsk) generally used by the civilian sector (e.g. in cadastre). geoinformatics fce ctu 10, 2013 21 šíma, j.: quality parameters of digital aerial survey . . . figure 6: coverage of the czech republic with airborne laser scanning data (march 2013) 5. quality parameters of new digital elevation models there are three final products of aerial laser scanning within the bounds of the project of new hypsomentry of the czech republic: • digital elevation model dmr 4g as a grid 5 x 5 m oriented paralelly to axes of national coordinate reference system s-jtsk or to the utm grids in zones 33 and 34 (fig. 7). this product should be always ready for distribution till 6 months after gathering als data. its implicit elevation accuracy given by rmse is 30 cm in open terrain and 1 metre in forested area. suitable applications are: high resolution ortophoto production, draining of precipitation from a catchment basin, modelling of ecological disasters [2]. • digital ground model dmr 5g in the form of triangulated irregular network representing most of terrain break lines too should be ready for distribution till 12 months after gathering als data. its implicit elevation accuracy given by rmse is 18 cm in open terrain and 30 cm in forested area. suitable applications are: new hypsometry of the czech republic (especially contour lines for state map series in scale from 1 : 5000 to 1 : 50 000), modelling of floods, modern spatial planning [3]. • digital surface model (dmp 1g) ready for distribution till 18 months after gathering als data. its implicit elevation accuracy given by rmse is 40 cm on solid objects or bare ground and 70 cm on the surface with full-grown vegetation. 
suitable applications are: optical visibility in rough terrain, night flying of helicopters, propagation of electromagnetic waves, true orthophoto in built-up areas.

figure 7: dmr 4g covering 68.4 % of the czech republic (march 2013)
figure 8: dmr 5g covering 33.4 % of the czech republic (march 2013)
figure 9: dmp 1g covering 32.9 % of the czech republic (march 2013)

the land survey office organized comprehensive testing for the accuracy assessment of the dmr 4g using 240 horizontal test areas in the central zone of the czech republic (each equipped with 30 - 100 check points) for the determination of an average systematic error in elevations within the whole area mentioned above. it was -0.15 m (below the ground surface). after mass elimination of this systematic error the standard deviation reached 0.08 m only! 970 representative points of terrain surfaces were chosen and geodetically measured in 6 terrain types with various land cover. the results of the comparison with identical points (table 4) interpolated from the 5 x 5 m grid showed that the expected elevation accuracy has been generally reached, except for the break lines of roads, embankments and trenches, where this grid is too smooth for such an application [2].

category of surface and land cover | systematic error [m] | rmse (h) [m] | maximum error [m]
roads and highways                 | -0.25                | 0.34         | 0.77
hard surfaces without vegetation   | -0.01                | 0.07         | 0.26
parks in built-up areas            | -0.09                | 0.14         | 0.22
arable land                        | -0.01                | 0.13         | 0.66
meadows and pastures               | -0.09                | 0.18         | 0.85
shrubs, parkways, forests          | -0.02                | 0.13         | 0.85
average value                      | -0.08                | 0.17         | 0.60

table 4: results of testing the dmr 4g elevation accuracy

that is why the main product of aerial laser scanning of the whole state territory will be the digital ground model of the 5th generation (dmr 5g). its density (up to two points per square metre) allows most of the important terrain break lines to be represented, but it will be appropriately reduced on plane or less curved surfaces [3].

category of surface and land cover | systematic error [m] | rmse (h) [m] | maximum error [m]
break lines of roads and highways  | -0.11                | 0.18         | 0.66
hard surfaces without vegetation   | -0.09                | 0.13         | 0.37
arable land                        | -0.07                | 0.14         | 0.56
meadows and pastures               | -0.03                | 0.21         | 0.42
shrubs, parkways, forests          | -0.06                | 0.13         | 0.46
average value                      | -0.07                | 0.14         | 0.49

table 5: results of testing the dmr 5g elevation accuracy
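the comparison behind tables 4 and 5 rests on interpolating grid heights at geodetically measured check points and evaluating the height differences. the following is a minimal sketch assuming the dem is a small numpy array with a 5 m cell and the check points lie inside it; the tiny synthetic grid, the coordinates and the bilinear interpolation are illustrative assumptions, not the land survey office procedure.

import numpy as np

def bilinear(dem, x, y, cell=5.0):
    """interpolate dem heights at points given in metres from the grid origin;
    dem[row, col] with 'cell' metre spacing, rows along y, cols along x.
    points are assumed to lie inside the grid."""
    cx, cy = x / cell, y / cell
    i0, j0 = np.floor(cy).astype(int), np.floor(cx).astype(int)
    fy, fx = cy - i0, cx - j0
    z00, z01 = dem[i0, j0], dem[i0, j0 + 1]
    z10, z11 = dem[i0 + 1, j0], dem[i0 + 1, j0 + 1]
    return (z00 * (1 - fx) * (1 - fy) + z01 * fx * (1 - fy)
            + z10 * (1 - fx) * fy + z11 * fx * fy)

# illustrative data only: a tiny 5 x 5 m grid and two measured check points
dem = np.array([[300.0, 300.4, 300.9],
                [300.2, 300.6, 301.1],
                [300.5, 300.9, 301.4]])
x_check = np.array([2.5, 7.5])            # metres east of the grid origin
y_check = np.array([2.5, 6.0])            # metres north of the grid origin
h_measured = np.array([300.35, 300.95])   # geodetically measured heights [m]

dh = bilinear(dem, x_check, y_check) - h_measured
print("systematic error:", dh.mean(), "rmse:", np.sqrt(np.mean(dh ** 2)))

grouping the differences by land cover category before computing the statistics would reproduce the per-category layout of the tables.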
all products of digital aerial survey and airborne laser scanning from the entire area of the czech republic are already, or will step by step become, available on the geoportal of the czech office for surveying, mapping and cadastre (http://geoportal.cuzk.cz).

acknowledgment

the author thanks the employees of the land survey office for their cooperation, for providing data and for submitting several results of their analyses.

references

[1] brázdil, k. (2009): projekt tvorby nového výškopisu území české republiky. geodetický a kartografický obzor, 2009, č. 7, pp. 145-151.
[2] brázdil, k. et al (2010): technická zpráva k digitálnímu modelu reliéfu 4. generace (dmr 4g). zeměměřický úřad a vojenský geografický a hydrometeorologický úřad. available from http://geoportal.cuzk.cz/dokumenty/technicka_zprava_dmr_4g_15012012.pdf
[3] brázdil, k. et al (2011): technická zpráva k digitálnímu modelu reliéfu 5. generace (dmr 5g). zeměměřický úřad a vojenský geografický a hydrometeorologický úřad. available from http://geoportal.cuzk.cz/dokumenty/technicka_zprava_dmr_5g.pdf
[4] brázdil, k. et al (2012): technická zpráva k ortofotografickému zobrazení území čr (ortofoto české republiky). zeměměřický úřad a vojenský geografický a hydrometeorologický úřad. available from www.linkon.cz/gao6p
[5] šíma, j. (2009): průzkum absolutní polohové přesnosti ortofotografického zobrazení celého území české republiky s rozlišením 0,50, 0,25 resp. 0,20 m v území na západočeské univerzitě v plzni. geodetický a kartografický obzor, 2009, č. 9, pp. 214-220.
[6] šíma, j. (2010): nové zdroje geoprostorových dat pokrývajících celé území státu od roku 2010 - první výsledky výzkumu jejich kvalitativních parametrů. in: sborník sympozia gis ostrava 2011. všb-tu ostrava, 2011 (nestránkováno). isbn 978-80-248-2366-9.
[7] šíma, j. (2011): příspěvek k rozboru přesnosti digitálních modelů reliéfu odvozených z dat leteckého laserového skenování celého území čr. geodetický a kartografický obzor, 2011, č. 5, pp. 101-106.
[8] šíma, j. (2013): digitální letecké měřické snímkování - nový impulz k rozvoji fotogrammetrie v české republice. geodetický a kartografický obzor, 2013, č. 1, pp. 15-21. available from http://egako.eu/wp-content/uploads/2012/11/gako_2013_01.pdf
[9] švec, z. (2013): absolutní polohová přesnost ortofota čr vyhotoveného z digitálních leteckých měřických snímků. geodetický a kartografický obzor, 2013, č. 2, pp. 31-38. available from http://egako.eu/wp-content/uploads/2012/11/gako_2013_02.pdf

exploitation of countrywide airborne lidar dataset for documentation of historical human activities in countryside

petr dušánek, charles university in prague, faculty of science, albertov 6, prague 2, czech republic, p_dusanek@centrum.cz

keywords: als, dtm, shaded relief

abstract: during three years (2010-12) the czech office for surveying, mapping and cadastre, in cooperation with the ministry of defense of the czech republic and the ministry of agriculture of the czech republic, is mapping the entire area of the czech republic by airborne laser scanning (als) technology. the goal of this project is to derive a highly accurate digital terrain model (dtm) for administrative purposes such as detection of flooded areas, orthorectification of aerial images etc. such a data set also seems to be an interesting data source for mapping human activities in the countryside. human settlements, agriculture or mining activities have left significant scars on the natural landscape. these significant man-made structures are a part of the so-called cultural landscape. man-made structures include ancient settlements, remains of medieval mining activities or remains of settlements abandoned during the 20th century. this article generally presents how to derive information about man-made structures from raw lidar data. examples of significant findings of man-made imprints in the countryside are also presented.
the goal of this article is not to describe a certain archeological site but to inform about the strengths of als data for mapping human activities in the countryside, mainly in forested areas.

1. introduction

1.1 project of new elevation mapping of the czech republic

since spring 2010 the entire area of the czech republic has been mapped by airborne laser scanning (als) technology. the investors of this project are the czech office for surveying, mapping and cadastre, the ministry of defense and the ministry of agriculture of the czech republic. the czech office for surveying, mapping and cadastre organizes and coordinates the project and is responsible for the data processing of ¾ of the area of the czech republic. the ministry of defense provides data acquisition and the capacity for processing data from ¼ of the area. the ministry of agriculture contributes resources for leasing the als system. for the data acquisition the czech republic is divided into 3 parts (see figure 1). the first one ("central zone") was covered in 2010, the "zone west" is being scanned in 2011 and the "zone east" will be scanned in 2012. the parameters of the scanning have been set to gain a point cloud with a density of about 1 point/m2 and with a side overlap of adjacent strips of about 50%. there are two sets of flight projects: one for the spring seasons and one for the vegetation seasons. during the spring seasons the average flying height is approximately 1400 m above ground level and the distance between flight lines is about 830 m; during the vegetation seasons the average flying height is approximately 1200 m above ground level and the distance between flight lines is about 715 m.

figure 1: division of the czech republic for data acquisition

1.2 airborne laser scanning

an airborne laser scanner, also called lidar, is an active remote sensing sensor (like radar). the difference between radar and lidar is the wavelength of the electromagnetic radiation used; lidar works in the visible or near-infrared spectrum [1]. the basic principle of als is measuring the distance between the sensor and a point on the ground [2]. the distance is measured via the transit time of a laser beam: the emission of a laser pulse starts the time measurement, the pulse hits the measured surface and returns, and the measurement of the transit time is then stopped [3]. from this definition it is clear that the distance between the sensor and the measured point is ½ of the transit time multiplied by the speed of light. the accuracy of the time measurement is crucial for the accuracy of the distance measurement. an airborne laser scanner system consists of a laser distance meter unit, an opto-mechanical scanner and a processing & control unit [4]. the laser distance meter unit consists of a laser emitter, clocks for measuring the transit time and an electro-optical receiver. the opto-mechanical scanner deflects the emitted laser pulses across the flight direction. the processing & control unit consists of a data recorder and a gnss/imu system, which is necessary for georeferencing the measurements into the reference coordinate system (wgs84).

1.3 als data processing

processing of als data is divided into three main phases. the first phase, so-called preprocessing, consists of the separation of the discrete echoes which represent the measured surfaces. in the project mentioned above, a full-waveform scanner riegl lms q680 is used. digitizing of full-waveform als data (separation of discrete echoes) is a process of modeling the waveform as a series of gaussian distribution functions [5].
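a minimal sketch of this decomposition idea follows, fitting a sum of gaussian pulses to a digitized waveform with scipy; the synthetic two-echo waveform, the fixed number of echoes and the initial guesses are illustrative assumptions, not the calibrated processing chain of the project. the range comment uses the relation quoted above (half the transit time multiplied by the speed of light).

import numpy as np
from scipy.optimize import curve_fit

C = 299_792_458.0  # speed of light [m/s]

def two_gaussians(t, a1, t1, s1, a2, t2, s2):
    """sum of two gaussian echoes as a function of time t [ns]."""
    return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

# synthetic waveform: canopy echo near 40 ns, ground echo near 55 ns (illustrative)
t = np.arange(0, 100, 1.0)                                   # sample times [ns]
wave = two_gaussians(t, 0.6, 40, 2.5, 1.0, 55, 2.0)
wave += np.random.default_rng(0).normal(0, 0.02, t.size)     # receiver noise

p0 = [0.5, 35, 2, 0.8, 60, 2]                                # rough initial guess
params, _ = curve_fit(two_gaussians, t, wave, p0=p0)

for amp, t_echo, sigma in params.reshape(2, 3):
    # one-way range: half the two-way transit time times the speed of light
    rng = 0.5 * (t_echo * 1e-9) * C
    print(f"echo at {t_echo:.1f} ns (width {sigma:.1f} ns) -> range {rng:.2f} m")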
the next part of the preprocessing is the georeferencing of the als data using two inputs: the digitized als data and the flight trajectory. flight trajectories are reconstructed from on-board gnss/imu data corrected by ground gnss base station data. the last part of the preprocessing phase is the strip adjustment, which is a process of relative orientation of neighbouring flight strips eliminating residual errors in the als data. the strip adjustment is done by matching tie objects (tie planes) in overlapping areas [6]. the output of the preprocessing is a point cloud, which is a large set of mass 3d points. each point can also carry some additional information (intensity, gnss time, etc.). the point cloud contains a mixture of ground points and points lying on objects above the bare earth. the process of separating ground and non-ground points is called filtration. there are several filtering algorithms, which are described and compared in [7]. the robust filtering algorithm (used in the project mentioned in 1.1) iteratively levels down the reference surface; points lying too high above the reference surface are sorted out and do not affect the height of the reference terrain in the next step. automatic filtering of als data is quite successful, but manual checking and editing of errors is still necessary. digital terrain models (dtm) can be derived from the ground points. dtms have the form of a regular grid or a triangulated irregular network (tin).

2. als derivatives suitable for archeological prospection

2.1 digital terrain model

a digital terrain model (dtm) is a 3-d representation of the terrain including natural structures (hills, valleys etc.). highly accurate and detailed dtms also include man-made structures permanently changing the character of the natural terrain. such man-made structures include linear transportation objects like road and railroad causeways, rock fill dams etc. the distinctness of a man-made structure in a dtm depends on its size and its age. a dtm generally exists in two forms, grid (figure 2 a) and tin (figure 2 b). the advantage of regularly spaced grid models is their simple representation, which is advantageous especially in computations. the disadvantage of grid models is the fact that they lose information about terrain edges. the size of the smallest object represented by a model depends on the resolution of the grid. irregularly spaced mass points (the als point cloud) are the source for tin model generation. chosen mass points become nodes of a triangular network; nodes are connected by edges and 3 edges form a triangle. a tin model created directly from als data is too dense. the density can be reduced by removing irrelevant points (omitting points on flat terrain). a tin model better depicts breaklines of the terrain such as rock edges, dike edges, etc., and form lines of the terrain such as ridge lines and valley lines. the density of tin nodes is higher in rugged terrain and on terrain edges.

2.2 visualization of dtm

dtms are suitable products for many applications like orthorectification of aerial images, flood risk modeling, pedological modeling etc. archeology and cultural heritage exploration and documentation need suitable visualizations derived from the dtm. the most common visualizations of dtms are hypsometry (figure 3 a), contour lines (figure 3 b), shaded relief (figure 3 c) or their combination (figure 3 d).

figure 2: visualization of dtm (source: archive of land survey office). left: visualization of grid dtm; right: visualization of tin dtm
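a toy sketch in the spirit of the filtration step described in section 1.3: points lying too high above a reference surface built from the lowest point per cell are rejected over a few iterations. the cell size, tolerance and fixed number of iterations are illustrative assumptions, not the robust filtering algorithm actually used in the project; the surviving ground points could then be gridded or triangulated into a dtm as described in section 2.1.

import numpy as np

def filter_ground(points, cell=5.0, tol=0.5, iterations=3):
    """toy ground filter: keep points within 'tol' metres of the lowest
    point in their grid cell, repeated 'iterations' times. illustrative only."""
    pts = points.copy()
    for _ in range(iterations):
        cols = ((pts[:, 0] - pts[:, 0].min()) // cell).astype(int)
        rows = ((pts[:, 1] - pts[:, 1].min()) // cell).astype(int)
        lowest = {}
        for key, z in zip(zip(rows, cols), pts[:, 2]):
            lowest[key] = min(lowest.get(key, np.inf), z)   # reference surface per cell
        keep = np.array([z <= lowest[key] + tol
                         for key, z in zip(zip(rows, cols), pts[:, 2])])
        pts = pts[keep]
    return pts

# illustrative point cloud: ground near 300 m plus two vegetation returns
cloud = np.array([[1.0, 1.0, 300.1], [2.0, 1.5, 300.3], [2.5, 2.0, 312.0],
                  [3.0, 3.0, 300.2], [4.0, 2.5, 305.5], [4.5, 4.0, 300.4]])
print(filter_ground(cloud))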
figure 3: classical visualizations of elevation data for archeological purposes

shaded relief is the most suitable traditional visualization method for elevation data. it is mostly generated with an illumination azimuth of 315° (north-west) and an illumination altitude of 45°. a disadvantage of shaded relief is the fact that linear structures parallel to the illumination direction are not displayed [9]. this problem can be solved by using two shaded reliefs with perpendicular illuminations. figure 4 a) shows a shaded relief illuminated from the north-west and figure 4 b) one illuminated from the south-west. in figure 4 a) a linear structure inside the red polygon is obvious; in figure 4 b) a perpendicular linear structure inside the green polygon is well recognizable.

figure 4: shaded reliefs with perpendicular illuminations. left: classical illumination from the northwest. right: illumination from the southwest.

the usage of shaded relief is limited by the illumination azimuth. a combination of shaded relief and terrain slope (figure 5 a) ensures improved visualization in comparison with the terrain slope visualization itself mentioned in [9]. a combination of shaded relief and terrain aspect is another possible visualization method (figure 5 b). as already mentioned, a different angle of shaded relief illumination shows linear structures in different directions. a shaded relief is represented by a grayscale image, and an rgb image is composed of three grayscale images. this paper presents a colour-composed shaded relief, which is a combination of three shaded relief images (red, green and blue channels) illuminated from azimuths of 315°, 195° and 75° (figure 5 c). a colour-composed rgb shaded relief seems to be ideal for archeological exploitation. the colour of the illuminated surface represents the aspect and the hue represents the slope of the terrain.

figure 5: alternative visualizations of elevation data: a) shaded relief with a transparent slope layer, b) shaded relief with a transparent aspect layer, c) colour-composed rgb shaded relief.
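a minimal sketch of the colour-composed relief described above: three hillshades of the same dtm, illuminated from azimuths of 315°, 195° and 75°, are stacked as the r, g and b channels. the simple gradient-based hillshade, the 45° sun altitude, the row orientation and the synthetic terrain are illustrative assumptions, not the exact rendering used for the figures.

import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg=45.0, cell=1.0):
    """simple hillshade of a dem (2d array); rows are assumed to run south to north."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell)     # d(height)/d(north), d(height)/d(east)
    # unit vector pointing towards the sun (azimuth clockwise from north)
    sun = np.array([np.cos(alt) * np.sin(az), np.cos(alt) * np.cos(az), np.sin(alt)])
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(dem)      # surface normals
    norm = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    shade = (nx * sun[0] + ny * sun[1] + nz * sun[2]) / norm
    return np.clip(shade, 0.0, 1.0)

def rgb_composite(dem, azimuths=(315, 195, 75), cell=1.0):
    """stack three hillshades (one per illumination azimuth) as r, g, b channels."""
    return np.dstack([hillshade(dem, a, cell=cell) for a in azimuths])

# illustrative synthetic terrain only: a low mound on a flat plane
y, x = np.mgrid[0:100, 0:100].astype(float)
dem = 5.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 400.0)
rgb = rgb_composite(dem)
print(rgb.shape)   # (100, 100, 3); can be shown directly with matplotlib imshow

because each channel is lit from a different azimuth, a linear structure invisible in one channel still shows up in another, which is the point of the composite.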
3. examples of detection of man-made structures in elevation data

3.1 ancient celtic oppidum

the oppidum vladař is situated in the cadastral area of the villages záhořice and vladořice in the county of karlovy vary. the acropolis is situated on a plateau of the hill, elevated about 230 m above the valley of the river střela. the oppidum was constructed gradually over a longer time period. its total area is 115.3 ha, and the area of the acropolis is 13.4 ha. the remains of a circumferential fortification defending the acropolis have the form of a rampart [10]. an orthophoto is shown in figure 6 a) and a colour-composed rgb shaded relief in figure 6 b).

figure 6: a) orthophoto and b) colour-composed rgb shaded relief of the oppidum vladař

3.2 medieval mining activities

on the steep slopes of krušné hory in the municipality of krupka there are many remains of mining activities. as mentioned on the official web pages of the municipality [11], the first mentions of mining in this region go back to the 10th century. in figure 7 it is easy to recognize dumps around surface mining pits (the red polygon). in the green polygon there is a linear structure which appears to be a collapsed underground shaft.

figure 7: colour-composed rgb shaded relief of mining fields near dolní krupka

3.3 post-world war 2 abandoned settlement

the abandoned settlement linda (german lindau) is one of the typical examples of historical changes after world war 2 in the border regions of the czech republic. according to web pages dealing with perished villages [12], the border zone was the reason for the destruction of the linda settlement. the number of inhabitants was 173 in 1921 and 11 in 1950; the number of buildings was 26 in 1921 and 14 in 1950 [12]. figure 8 shows a) an orthophoto and b) a colour-composed rgb shaded relief of the abandoned settlement linda.

figure 8: a) orthophoto and b) colour-composed rgb shaded relief of the abandoned settlement linda

4. conclusion

this paper presents the big potential of als data for archaeological investigations. man-made structures in the landscape can be distinguished from the natural terrain. it is important to prepare a suitable source of information for the detection of man-made structures from als data. first of all it is necessary to filter out points reflected from non-ground objects; the filtration process is necessary for many other applications exploiting elevation data as well. for archeology it is crucial to visibly detect significant elevation changes in the dtm, and for such purposes shaded relief is suitable. to detect structures in all directions it is good to combine shaded relief with slope [9] or aspect. this paper presents a new technique for visualizing elevation data, based on the composition of three shaded reliefs, each illuminated from a different azimuth. the composition yields an rgb image showing linear structures in all directions. data density is the only factor limiting the usefulness of als data in archaeological prospection. this circumstance is mostly influenced by the scanner used, the flying height and the vegetation season. the data used for this paper was acquired for the whole country in a short period; mapping at scale 1 : 5000 is its basic purpose. the data is not dense enough and not the whole data set was taken in the ideal vegetation season as in [9]. nevertheless, the als data can be an interesting source for archaeological investigation, especially in wooded areas.

5. acknowledgments

i would like to thank the land survey office of the czech republic and the czech office for surveying, mapping and cadastre for kindly lending data and software.

6. references

[1] hudak, a.t., lefsky, m.a., cohen, w.b., berterretche, m.: integration of lidar and landsat etm+ data for estimating and mapping forest canopy height, remote sensing of environment, 82(2002)2-3, 397-416.
[2] huising, e.j., gomes pereira, l.m.: errors and accuracy estimates of laser data acquired by various laser scanning systems for topographic applications, isprs journal of photogrammetry & remote sensing, 53(1998)5, 245-261.
[3] kašpar, m., pospíšil, j., štroner, m., kuemen, t., tejkal, m.: laserové skenovací systémy ve stavebnictví, praha, stavební fakulta čvut, 2003.
[4] wehr, a., lohr, u. (1999): airborne laser scanning - an introduction and overview, isprs journal of photogrammetry & remote sensing, 54(1999)2-3, 68-82.
[5] wagner, w., ullrich, a., ducic, v., melzer, t., studnicka, n.: gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner, isprs journal of photogrammetry & remote sensing, 60(2006)2, 100-112.
[6] kager, h.: discrepancies between overlapping laser scanner strips - simultaneous fitting of aerial laser scanner strips, international archives of photogrammetry and remote sensing, 35(2004)b1, 555-560.
[7] sithole, g., vosselman, g.: experimental comparison of filtering algorithms for bare-earth extraction from airborne laser scanning point clouds, isprs journal of photogrammetry & remote sensing, 59(2004), 85-101.
[8] kraus, k., pfeifer, n.: determination of terrain models in wooded areas with airborne laser scanner data, isprs journal of photogrammetry & remote sensing, 53(1998)4, 193-203.
[9] doneus, m., briese, c.: full-waveform airborne laser scanning as a tool for archaeological reconnaissance, proceedings of the 2nd international conference on remote sensing in archaeology, rome, december 2006, 99-106.
[10] chytráček, m., šmejda, m.: opevněný areál na vladaři a jeho zázemí. k poznání sídelních struktur doby bronzové a železné na horním toku střely v západních čechách. archeologické rozhledy 57(2005), 3-56.
[11] pavelka, k., svatušková, j., bukovinský, m.: using of vhr satellite data for potential digs localization and their verification using geophysical methods, 1st earsel international workshop on "advances in remote sensing for archaeology and cultural heritage management", rome, 30.9. - 4.10.2008.
[12] pavelka, k., pikhartová, l., faltýnová, m., tezníček, j.: combining of aerial laser scanning data, terrestrial mobile scanned data and digital orthophoto. proceedings of 31st acrs conference [cd-rom]. hanoi: acrs, 2010, vol. 1, p. 351-356. isbn 3-11-017708-0.
[13] faltýnová, m., pavelka, k.: mobile laser scanning data combining with aerial laser scanning data and orthophoto. elmf 2010 conference proceedings [cd-rom]. nailsworth: intelligent exhibitions ltd, 2010, p. 385-391. isbn 978-0-625-84328-2.
[14] http://www.krupka-mesto.cz/, 2011-05-24
[15] http://www.zanikleobce.cz/, 2011-05-24

gui for the orchestration of geoweb services

františek klímek, institute of geoinformatics, vsb-tu of ostrava, frantisek.klimek.hgf@vsb.cz

keywords: geoweb, geoinformatics, web services, orchestration, bpel, gui

abstract

part of the research project "orchestration of services for the geoweb" (ga 205/07/0797), carried out at the institute of geoinformatics of všb-tu ostrava and dealing with the possibilities of orchestrating web services in the gis domain and with verifying the practical capabilities of the available languages for describing and planning business processes, is also a part devoted to the design of a graphical user interface that would allow users at different levels of functionality to work with these service orchestrations. what degree of functionality do the individual users require? should they be allowed to search for orchestrations, run them, parametrize them, modify them, or even design them? the following text tries to answer these questions: it summarizes the basic facts about orchestration in the geoweb domain, gives an analysis and description of the characteristics of the individual users, and presents the design of the end-user graphical interface itself, together with a description of the components that should be available in this interface for working with orchestrations.

introduction

web services are inevitably becoming a part of most information systems. with the growing number of freely available as well as commercial services, possibilities arise for connecting them into larger functional units.
static linking of services alone cannot exploit their potential, let alone the potential of service oriented architecture (soa), which attracts interest across the whole it industry and is quickly penetrating the core of applications essential for carrying out business operations. services therefore need to be chained dynamically, i.e. combined according to the current needs and possibilities of the user (connection status, finances, required accuracy of the results, response speed, etc.). at present, two ways of chaining web services are distinguished, known as orchestration and choreography [pram].

orchestration
standard technologies working with web services, such as wsdl (web service description language), soap (simple object access protocol) and uddi (universal description, discovery and integration), give us the means to describe, locate and invoke individual services. even though a web service may offer many methods, each wsdl file describes literally atomic (low-level) functions. what these basic technologies do not provide are the important details describing the behaviour of a service as part of a larger, more complex collaboration. when such a collaboration is a collection of activities (methods, services) designed to successfully fulfil a given business goal, it is called a business process, and the description of the collection of activities that makes up this business process is called orchestration [pram].

within the project, several languages commonly used for orchestration were analysed, and after this analysis bpel was found to be a suitable language for orchestration in the geoweb environment. the main function of bpel is the orchestration of web services, i.e. controlling the interplay of the functionality offered by the backend part of a system, or of several systems. this functionality is decomposed into operations that can be called through a web service. on the other hand, bpel itself stands behind a web service that defines its interface, i.e. its input operations. for every entry point into the process (a start / intermediate message event object in bpmn) there is therefore one operation in the web service describing the bpel interface. process inputs, however, do not have to be located exclusively at the beginning; asynchronous processes can have inputs at various places. it can thus be said that bpel implements a web service, while the application using the web service does not know whether a process lies behind it or whether it is implemented, for example, by an ejb module. bpel is also platform independent; implementations exist for java ee, .net and other platforms. a process implemented in bpel with one tool should also be transferable to, and executable in, another tool; some vendors of business process management systems (bpms), however, use their own extensions of bpel which make this portability impossible [tbpel].
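to make the preceding point concrete – a bpel process is consumed by a client simply as a web service operation, regardless of what stands behind it – a minimal sketch of such a call follows. it is a hedged illustration only: the endpoint url, namespace, operation and message fields are invented here and are not part of the project described in this paper; a real process would publish them in its wsdl.

```python
# hedged sketch: invoking a (hypothetical) bpel-exposed web service operation.
# the endpoint, namespace, operation and parameters are illustrative only.
import requests

ENDPOINT = "http://example.org/orchestration/BufferAnalysisProcess"  # hypothetical

soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:proc="http://example.org/orchestration/types">
  <soapenv:Body>
    <proc:startProcess>
      <proc:areaOfInterest>POLYGON((...))</proc:areaOfInterest>
      <proc:maxPrice>0</proc:maxPrice><!-- e.g. "use only free services" -->
    </proc:startProcess>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "startProcess"},
    timeout=60,
)
print(response.status_code)
print(response.text)  # soap response produced by the orchestrated process
```

the calling application cannot tell whether the operation is implemented by an orchestrated bpel process or, for example, by an ejb module; only the interface defined by the wsdl matters.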
architecture of the proposed system
one of the main goals of the grant project is to define a methodology and describe an architecture showing how the whole composed system – comprising services in various forms, orchestras, catalogues, etc. – could look and cooperate. to design the graphical user interface it is of course necessary to know this architecture at least at a basic level and to know where the gui component enters it. the following paragraphs therefore describe the system architecture, according to the research project, in its current form. further changes can still be expected before the end of the project, but they should not be dramatic, so no fundamental change of the gui concept is expected either.

the core of the orchestration is a service registry, which provides mechanisms for registering, categorising and, above all, searching web services in real time. if a user needs a specific service, he searches the registry, obtains its description there and can start using it. the registry, however, is aimed not only at services but also at processes, which in their interfaces essentially correspond to services, and it also contains an interface allowing services to be searched by description, parameters, keywords, performance metrics, type, etc. it is to this registry, or to a set of linked and potentially cooperating registries, that the user connects through his graphical user interface (gui) to look for the required services or processes. the main requirement on the gui application is therefore the ability to communicate with the service registry, to formulate the user's requests in a form understandable to him, and to visualise the registry's responses, again in a user-friendly form. the whole architecture is shown in fig. 1, which depicts its individual components.

fig. 1: individual components of the proposed system
– service 1..n
– adapter
– monitoring
– service register
– bpel processor
– gui

gui
one of the outputs of the mentioned grant project is to be a graphical user interface (gui). the interface should allow work with orchestras. the original plan was that orchestras could also be created in it, but this now appears unsuitable (see further in the text); an external application is better suited for that task. the gui should therefore "only" be able to visualise an orchestra together with the current service instances and allow the user to choose other service instances (by searching the registry), thus letting the user optimise the orchestra according to his individual requirements. the system could also address the needs of users at least by means of a basic set of user-profile parameters, i.e. a user context should be defined and, according to it, an adequate orchestra (or rather its instance) found in the knowledge base. a gui designed and described in this way should subsequently be implemented, for example, as a plugin for one of the desktop gis applications (openjump [oj] appears to be a suitable application), or made accessible as a web application, which also seems a very suitable option given the possible impact on a large number of potential users. the latter option could be represented, for example, by an implementation together with openlayers [ol], a javascript library that displays maps in the browser without depending on a server-side component.

users
if the requirement is to design a gui with high-quality, comprehensible and intuitive controls, one has to start, somewhat untraditionally, from the middle – i.e. from the "u". the gui is designed above all for users, so it is necessary to start from an analysis of the users who will access the process and to analyse their needs as well. each of them will certainly have different ideas and requirements about what the gui should look like, what level of detail about the process it should provide and what it should enable. first of all it is therefore necessary to look at the roles and users who access the process. looking at some sources of information on this topic, e.g. [ubpm], [rbpm], [tilsoa] or [bossoa], one can find a large number of diverse roles that are more or less necessary for the correct design and maintenance of processes built on this architecture.
as an example, let us name just some of them (a more detailed description of the individual roles and their competences can be found in the cited sources):
– process owner
– top (strategic) team or manager
– line manager
– bpm animator
– it specialist
– business consultant
– bms architect
– process teams
– innovation agent
– innovation centre
– process customer

this breakdown comes from the environment of enterprise applications and of companies that use similar technologies and processes built on service oriented architecture. it is certainly not a complete and fixed list, because in each company the roles may be adapted to the company's current needs, and so on [rlbpm]. in the environment described here, however, we restrict the set of users to the following two groups, which are essential from the point of view of designing the gui for the service registry and orchestration.

users creating processes
these are users who create a certain process and make it available for use. typically they are companies creating processes that include, for example, services provided by themselves; the purpose is thus the use of their services, which brings, for instance, financial profit or advertising. the second group creating processes may be enthusiasts who are interested in these technologies, or who create a process for their own needs and are happy to share it with others. such users usually have people at their disposal (or are such people themselves) who are skilled in the design and creation of processes; these are teams whose members not only have knowledge of the subject but usually also have the necessary software, both for designing a process and for deploying it on an application server. they can therefore be described as users who create processes, who subsequently want to register the process in the service registry and who have an interest in its being used. from the point of view of the gui design it can be stated that these users already have most of what they need – whether in the form of commercial solutions or of solutions built on open source software – so there is no need to devise further tools for them to visualise or edit a process.

users consuming processes
there is, however, a second group of users, who are consumers of the processes created in this way and who only want to run them or adjust them slightly (parametrise them). these are users who want to find a particular process and work with it, most often just to obtain its description and run it. this work, which consists of communicating with the service registry, should be user friendly and should not require deeper knowledge of soa. no such user environment, especially one for communicating with the proposed registry, is currently available. what should it be like? what should it make accessible to the user?

user requirements
the following paragraphs describe possible user requirements on this gui. the requirements are ordered from the simplest to the more advanced ones, which reach the boundary of process design – i.e. the boundary with the tools intended for the group of users who create processes.
– finding the required process: the main and most basic requirement of users is to find the process or service they need. users must of course be offered the possibility to refine the search according to the metrics obtainable from the service registry.
– running the selected process: together with the above requirement to find a process, running it is the second and at the same time the last of the main requirements. if the gui satisfied only these two requirements, it can be assumed that it would already be sufficient for the vast majority of users of the registry's services.
– parametrisation of the process (adjustment based on metrics): depending on how much the user wants to work with the process, one can speak of simple and more complex parametrisation. simple parametrisation means merely adjusting the input parameters of the process, or choosing the criterion according to which the process should be adjusted; the user's request may, for example, read "use only services that are free of charge". with this simple parametrisation the work is left to the orchestration core, which takes over the selection logic: only a template is sent as input, the core adjusts it into a concrete form and returns the result to the user. with more complex parametrisation, on the other hand, the user takes over the responsibility and the logic himself and, for example, chooses a substitute for one particular service where he requires, say, higher accuracy.
– workflow support: some processes can be defined as long-running processes with human interaction (human task management) [ubpm]; for these it would be appropriate to include in this single gui the user interface mediating that interaction. if the result involves a service requiring a refinement of the input, it is undesirable for the user to have to search for the place where the refinement should be entered; instead, he should be offered, for example as part of tracking the state of the orchestra, a simple way to provide it. so if, during the process, the user is asked to specify whether the analysis should be carried out for the municipality janovice nad úhlavou or for janovice (okr. f-m), he makes this refinement by choosing from the offered options directly in the proposed gui.
– displaying the process: the requirement to display the process will arise not only in the group of users who want to parametrise or modify the offered process in a more complex way; there will certainly also be users who simply want to see which services are involved, and so on.
– saving the process: after adjusting the process into the form they require, some users will want to save the modified process into the service registry in order to ensure its reusability in the already edited form.
– displaying the process and searching in the user's context: this point meets the needs of users who like to work in a user context, where the application knows about the user and offers results intended specifically for him. a user whose profile says that he is "thrifty" and uses only free services will thus not be offered paid services.
– status tracking: allows the user to follow the current state of the process he has started and displays information such as the expected time remaining until the started process finishes.
– monitoring: some users will require closer information about the running process and will want to know which service is currently involved, which service is being waited for, etc. it would be useful to display the process together with the currently running steps highlighted.
– debugging: if the execution of an orchestra fails, some users will certainly want to know why it failed, at which point, and so on.
debugging should allow them to step through the process and thus reveal the weak spot – to find the place, i.e. the service, that returns wrong results or none at all. on this basis users will be able to choose a substitute service for the weak spot in the process and so run the required process, for example, faster: once a slow service has been revealed, it is replaced by a service providing comparable, usable data more quickly.
– process designer: for the group of users who are consumers of processes this appears unnecessary – see above in the text.

gui elements
the gui will be composed of certain elements that will be usable on their own but also interconnected in a certain way. based on the user's work, the elements that could be useful for the given activity will be displayed interactively. the elements are listed in an order that tries to correspond to the possible user requirements:
– search box
– results pane
– dialog for working with a process
– dialog showing detailed information about a process
– dialog for simple parametrisation
– dialog for process visualisation
– map pane
– button for running the process
– button for saving the process
– process progress tracker
– process monitor
– process debugger
– display of the process result
– login dialog

form of the displayed process
when designing a new process, bpmn is usually used. the primary goal of bpmn, however, is to provide a notation that is easily understandable to all business users: business analysts who design the processes, technical developers who implement the technologies for executing them, and managers who monitor and control them. bpmn creates a standardised bridge between the design of business processes and their implementation. a further goal of bpmn is to enable the visualisation of xml languages intended for the design and execution of processes (such as bpel4ws) through a business-oriented notation [reen]. only afterwards is this notation of the designed process usually converted into its implementation in bpel, bpml or another process-execution language. bpmn therefore defines how individual elements, and sequences of these elements, are converted into bpel, so a process model can be converted into its executable form. owing to the relative freedom of modelling in bpmn, however, it is usually not possible to generate bpel automatically; some bpms tools do offer this function, at the price of certain restrictions during the modelling of the process itself [ubpm3]. automatic generation can also be ensured by strictly following the rules defined in bpmn. unlike bpmn, bpel has no implicit graphical representation and describes the process at the executable level – it is essentially program code. yet it is bpel that will be accessible from the registry for the purposes of process visualisation in the gui. some of the software tools used for building soa applications, such as netbeans [nb], ease the transition from bpmn to bpel by trying to use the same graphical elements, although this is by no means the rule [tbpel]. given that the services will be stored in the registry in the form of bpel, this route appears suitable. the process is visualised in a form that, with a little effort, is understandable even to moderately advanced users, and it can be assumed that it is precisely the more advanced users who will require more advanced functionality for working with orchestras.
figure 2 below shows a sample process created and visualised in netbeans, and figure 3 shows a process visualised using the weep engine [weep], which can convert a bpel file into svg or png form. this engine could be well usable for a working implementation of the described gui.

fig. 2: bpel process visualised using netbeans [nb]
fig. 3: bpel process visualised using the weep engine [weep]

working scenario
the following paragraphs describe a possible scenario of working with the graphical interface for the orchestration of geoweb services. depending on the form – a desktop application or a web interface – the user starts the work either by invoking a menu item in the application for which a plugin has been created, or by launching a web browser and entering the address where the client application is available as a web application. the user is then shown an initial dialog window containing a text field and a map field, both intended for searching for services. there will also be an "advanced" option allowing the search term to be refined, or allowing the user to specify already at this point that the returned orchestras should be parametrised, for example by price. there should also be a user login option, which would subsequently influence the services and processes found. the user is then shown the discovered services and orchestras with the possibility of displaying further details; for displaying geographic details the component providing map output is used again.

fig. 4: draft of the home page of the portal serving ordinary users
fig. 5: display of the discovered services

after selecting a given orchestra, the user is directly offered its simple parametrisation, or the selected orchestra can be run. if a modification of the process is requested, the process is adjusted by the registry and returned again in a similar dialog window (web page), and the parametrisation can be repeated until the user is satisfied.

fig. 6: display of all details of a process, including the possibility of parametrisation and execution

when more complex parametrisation is chosen, the process is shown to the user in its graphical form – see fig. 2 or fig. 3. when a service is to be exchanged for another one, the dialog for searching and selecting services is used again. after the process has been started, a progress dialog is shown to the user and the result is then displayed.

fig. 7: display of information about the progress of a running process

proposed interface
from the above it is clear that the gui will access the service registry and the bpel processor. the following paragraphs describe the basic interface towards these two components.

gui – service registry
– getservices() – returns a list of processes/services adjusted according to metrics, templates, the user, etc. the returned list also contains basic metrics and information about the processes and services.
– getdetail() – returns all available information about a process or service. it can return the process in the form of a bpel file, which can then be visualised.
– save() – serves to store the modified process in the service registry for later reuse.

gui – bpel processor
– execute() – allows the bpel processor to be called in order to run a specific service stored in the service registry, or a service that has been modified by the user and whose storage in the registry is not required.
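a minimal sketch of how this gui-side interface towards the registry and the bpel processor could look follows. it is a hedged illustration only: the class names, type hints and docstrings are invented here and are not part of the project's specification.

```python
# hedged sketch of the gui-facing interface described above; names illustrative.
from dataclasses import dataclass


@dataclass
class ProcessInfo:
    """basic metrics and information about a registered process or service."""
    identifier: str
    description: str
    price: float = 0.0           # example metric: cost of using the service
    response_time_ms: int = 0    # example metric: typical response time


class ServiceRegistryClient:
    """gui – service registry interface (getservices, getdetail, save)."""

    def getservices(self, query: str, user_profile: dict | None = None) -> list[ProcessInfo]:
        """return processes/services matching the query, adjusted by metrics,
        templates or the user profile (e.g. only free services for a 'thrifty' user)."""
        raise NotImplementedError("call to the backend registry goes here")

    def getdetail(self, identifier: str) -> str:
        """return all available information; the process itself comes back
        as a bpel document that the gui can visualise."""
        raise NotImplementedError

    def save(self, bpel_document: str) -> str:
        """store a user-modified process in the registry for later reuse
        and return its new identifier."""
        raise NotImplementedError


class BpelProcessorClient:
    """gui – bpel processor interface (execute)."""

    def execute(self, bpel_document_or_id: str, inputs: dict) -> dict:
        """ask the bpel processor to run a registered process, or an ad-hoc
        user-modified one that is not to be stored, and return its result."""
        raise NotImplementedError
```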
conclusion
at present the graphical user interface has been designed at a theoretical level, and model dialogs and components that could be useful when working with orchestras have been created. to confirm usability and user friendliness, however, the most important step will be the interaction of this design directly with users. only after this interaction with a selected, diverse group of users – in the first phase carried out also at a theoretical level – is it appropriate to proceed to the realisation of the gui, i.e. its practical implementation. subsequently it is advisable to carry out a second round of interaction with users and to incorporate the comments arising from real work with the proposed gui. the current design is based on the currently proposed architecture, which may still be slightly adjusted, and this may also be reflected in the proposed graphical interface.

references
[bossoa] bose s., bieberstein n., fiammante m., jones k., shah r.: soa project planning aspects, online1.
[nb] netbeans product homepage, online2.
[oj] openjump project homepage, online3.
[ol] openlayers project homepage, online4.
[pradp] prager, m.: řetězení webových služeb v prostředí open source gis (chaining of web services in an open source gis environment), diploma thesis, ostrava, 2007, online5.
[pram] prager m., maršík v.: využití orchestrace služeb pro řešení úloh v rámci iskř (using service orchestration for solving tasks within the integrated rescue system), online6.
[rbpm] role bpm (bpm roles), bpm portal, online7.
[reen] bpmn & bpel for business analysts, course introduction, online8.
[rlbpm] organizační struktury v procesním řízení (organisational structures in process management), bpm glossary, online9.
[tbpel] vašíček p.: seriál bpm prakticky, 5. část: tvorba bpel modulu (bpm in practice, part 5: creating a bpel module), online10.
[tilsoa] tilkov s.: roles in soa governance, online11.
[ubpm] vašíček p.: seriál bpm prakticky, 1. část: proč bpm s open source nástroji (bpm in practice, part 1: why bpm with open source tools), online12.
[ubpm3] vašíček p.: seriál bpm prakticky, 3. část: úvod do bpmn (bpm in practice, part 3: introduction to bpmn), online13.
[weep] weep project homepage, online14.
1 http://www.informit.com/articles/article.aspx?p=422305&seqnum=5
2 http://www.netbeans.org/
3 http://openjump.org/wiki/show/homepage
4 http://www.openlayers.org/
5 http://gisak.vsb.cz/~pra089/texty/dp_pra089_v1_0.pdf
6 http://gis.vsb.cz/gis_ostrava/gis_ova_2008/sbornik/lists/papers/093.pdf
7 http://www.procesy.cz/metodiky/role-bpm.htm
8 http://www.reengine.cz/index/bpmn-and-bpel-for-business-analysts.do
9 http://bpm-slovnik.blogspot.com/2007/09/organizace.html#role
10 http://bpm-sme.blogspot.com/2008/04/5-tvorba-bpel-modulu.html
11 http://www.infoq.com/articles/tilkov-soa-roles
12 http://bpm-sme.blogspot.com/2008/02/1-uvod-do-bpm-pro-sme.html
13 http://bpm-sme.blogspot.com/2008/03/3-uvod-do-bpmn.html
14 http://weep.gridminer.org/index.php/about_weep

cultural heritage recording utilising low-cost close-range photogrammetry
melanie kirchhöfer, jim chandler, rene wackrow
loughborough university, school of civil and building engineering, loughborough, le11 3tu, united kingdom
m.k.kirchhoefer@lboro.ac.uk, j.h.chandler@lboro.ac.uk, r.wackrow@lboro.ac.uk
keywords: close-range photogrammetry, heritage recording, orientation sensor, low-cost, smartphones
abstract: cultural heritage is under a constant threat of damage or even destruction, and comprehensive and accurate recording is necessary to attenuate the risk of losing heritage or to serve as a basis for reconstruction. cost-effective and easy-to-use methods are required to record cultural heritage, particularly during a world recession, and close-range photogrammetry has proven potential in this area. off-the-shelf digital cameras can be used to rapidly acquire data at low cost, allowing non-experts to become involved. the exterior orientation of the camera during exposure ideally needs to be established for every image, traditionally requiring known coordinated target points. establishing these points is time consuming and costly, and using targets can often be undesirable on sensitive sites. mems-based sensors can assist in overcoming this problem by providing a small-size and low-cost means to directly determine exterior orientation for close-range photogrammetry. this paper describes the development of an image-based recording system comprising an off-the-shelf digital slr camera, a mems-based 3d orientation sensor and a gps antenna. all system components were assembled in a compact and rigid frame that allows calibration of the rotational and positional offsets between the components. the project involves collaboration between english heritage and loughborough university, and the intention is to assess the system's achievable accuracy and practicability in a heritage recording environment.
tests were conducted at loughborough university and in a case study at st. catherine's oratory on the isle of wight, uk. these demonstrate that the data recorded by the system can indeed meet the accuracy requirements for heritage recording at medium accuracy (1-4cm), with either a single control point or even none. as the recording system has been configured with a focus on low-cost and easy-to-use components, it is believed to be suitable for heritage recording by non-specialists. this offers the opportunity for lay people to become more involved in their local heritage, an important aspiration identified by english heritage. recently, mobile phones (smartphones) with integrated camera and mems-based orientation and positioning sensors have become available. when orientation and position during camera exposure are extracted, these phones establish off-the-shelf systems that can facilitate image-based recording with direct exterior orientation determination. due to their small size and low cost they have the potential to further enhance the involvement of lay people in heritage recording. the accuracy currently achievable will also be presented.

1. introduction
cultural heritage plays a vital role in education about the past, in creating cultural or individual identity, and even in providing economic support for local communities [1,2,3]. despite these widely acknowledged benefits, cultural heritage is at constant risk through neglect and decay, deliberate destruction, damage due to social and economic progress, disasters, and armed conflict [3,4,5]. from this risk, an increased need for spatial recording can be recognised. comprehensive and accurate documentation can attenuate the risk of losing heritage and, in the worst case, serve as a basis for reconstruction [5]. the suitability of properly calibrated consumer-grade cameras for many heritage recording tasks has been demonstrated in [6,7,8]. while it is desirable to record within a three-dimensional (3d) national reference system, establishing known coordinated target points for exterior orientation estimation remains time consuming and costly and requires surveying expertise. direct exterior orientation estimation for close-range applications could overcome this problem by avoiding expensive target point surveys and enabling non-experts to record cultural heritage within an appropriate national reference system. in that way the cost is reduced even further by the possibility of employing volunteers [9]. direct exterior orientation estimation in close-range photogrammetry can be achieved using orientation sensors based on micro electro mechanical systems (mems) technology that have emerged on the market in recent years. although their accuracy is lower than that of their large-size counterparts, results of utilising them for mobile mapping projects and photogrammetry look promising [10,11]. direct positioning can be achieved using global positioning system (gps) devices. although positioning with current low-cost, handheld gps devices does not meet the requirements for some applications of close-range photogrammetry, there is potential for improvement in the future [12]. one example is the announcement by geneq inc. of a small-size, high-accuracy gps receiver (sxblue iii) that is available at much lower cost than conventional survey-grade gps receivers [13].
this paper presents the development and testing of a low-cost recording system for cultural heritage recording that utilises a low-cost orientation sensor and gps for direct exterior orientation determination. furthermore, the potential of utilising smartphones with integrated camera, orientation and position sensors for low-cost cultural heritage recording is investigated. first the recording system and its components are presented and the data collection and analysis process is explained. this is followed by a description of a recording system performance test at loughborough university and of a case study on the isle of wight, uk. the results of these tests are presented in section 4. in section 5 the methodology of the smartphone test is described and the results of this test are presented. after discussing the results of the recording system and smartphone tests, the paper finishes with a conclusion.

2. methodology
2.1 recording system
the recording system presented here comprises a calibrated consumer-grade digital camera (nikon d80) for image acquisition, a small-size 3d orientation sensor (pni tcm5) for orientation measurement, a survey-grade differential gps (dgps) (leica system 500) for 3d positioning, and a laptop for operating the orientation sensor (figure 20a).
figure 20: full recording system (a) and mounting frame (b).
camera, orientation sensor and dgps antenna were attached to a purposely built mounting frame that constrains the components in their orientation and position (figure 20b). this enables a reliable calibration of the rotational and positional offsets between the components. when the recording system was assembled in early 2010, no low-cost, small-size dgps receivers were available on the market to provide the centimetre accuracy required in this project. therefore, it was decided to use a survey-grade dgps receiver, enabling positioning with centimetre accuracy. although this is certainly not a low-cost component, it facilitates testing the principles of direct exterior orientation determination for close-range photogrammetry. the tcm5 orientation sensor is capable of measuring heading, pitch and roll using magnetometers and accelerometers. the expected accuracy of the measured angles is 0.3° in heading and 0.2° in pitch and roll [14].

2.2 offset calibration
in order to achieve accurate exterior orientation parameters of the camera, the rotational offset between camera and orientation sensor and the positional offset between camera and dgps antenna need to be calibrated. exterior orientation parameters for a set of images acquired using the recording system were derived indirectly in a leica photogrammetric suite (lps) bundle adjustment. these parameters were used as truth data and compared to orientation sensor and dgps measurements acquired at the time of exposure. for this purpose a routine was coded in mathworks' matrix laboratory (matlab) that used truth and measured data to estimate offset calibration values and their precision. the calibration values are defined by the arithmetic mean of the offsets calculated for each image, and the precision is indicated by their standard deviation. the calibration values were applied to the directly measured orientation and position values in order to derive direct exterior orientation parameters for each image.
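reduced to its core, this calibration is simple statistics over the per-image offsets. the following is a minimal sketch of the idea in python rather than the authors' matlab routine, with purely invented coordinate values; note that it only shows the mean/standard-deviation bookkeeping and ignores the fact that the positional offset is a lever arm that, in practice, also depends on the frame's orientation.

```python
# hedged sketch of the offset calibration idea: calibration value = arithmetic
# mean of the per-image offsets (truth minus measurement), precision = their
# standard deviation. python stand-in for the matlab routine; numbers invented.
import numpy as np

# "true" camera positions per image, from the lps bundle adjustment (metres)
truth_xyz = np.array([[451210.32, 298455.11, 63.52],
                      [451212.08, 298457.95, 63.49],
                      [451214.71, 298460.40, 63.55]])
# dgps antenna positions measured at the corresponding exposure times (metres)
measured_xyz = np.array([[451210.29, 298455.20, 63.63],
                         [451212.01, 298458.02, 63.60],
                         [451214.66, 298460.52, 63.68]])

offsets = truth_xyz - measured_xyz
calibration = offsets.mean(axis=0)        # positional offset calibration values
precision = offsets.std(axis=0, ddof=1)   # standard deviation = precision

# applying the calibration to a measurement yields a directly derived position
corrected = measured_xyz + calibration

print("calibration (easting, northing, height):", calibration)
print("precision   (easting, northing, height):", precision)
```

the rotational offset between camera and orientation sensor can be treated in the same way, once the bundle-adjustment angles have been converted into the heading, pitch and roll convention of the sensor, as described next.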
the matlab routine also included an algorithm to convert the true omega, phi and kappa values into equivalent heading, pitch and roll values, in order to enable comparison between the indirectly derived (omega, phi, kappa) and directly measured (heading, pitch, roll) orientation angles. another algorithm was needed to convert the corrected heading, pitch and roll values into omega, phi and kappa values suitable for use in a bundle adjustment. a detailed description of the offset calibration process will be presented in a future publication.

2.3 data collection and analysis
for testing the performance of the recording system, data was recorded from a varying number of camera stations adjacent to a test object which included coordinated points. a camera station here is defined as the position and orientation of the mounting frame at the time of image acquisition. for each acquired image, the orientation and position at the time of exposure were measured by the orientation sensor and the dgps receiver, respectively. the imagery, orientation and position data of all camera stations acquired on a particular date establish a data set. calibration values were derived from the collected data and applied to the measurements of the same data set. because the camera had been detached from the mounting frame between the collection of the different data sets, no independently derived offset calibration values considered suitable to correct the orientation and position measurements were available. assuming that the best suited calibration values are derived from the same data set, the results of the accuracy assessment indicate the theoretically highest accuracy achievable. the corrected orientation sensor and dgps measurements were used to provide initial exterior orientation parameters, constrained by the estimated calibration precision, in a bundle adjustment software known as gap [15]. for each data set the gap bundle adjustment was run twice. in the first run no control points were used, relying on the exterior orientation parameters derived from the orientation sensor and dgps only; the coordinated points of the test object were used as check points and their coordinates were estimated in the bundle adjustment. in the second run one coordinated point was used as a control point, with corresponding image point coordinates in only one image, and coordinates for the remaining check points were estimated. for both runs the estimated coordinates were compared to the known coordinates of the points, allowing the calculation of the root mean square error (rmse) for easting, northing and height to quantify absolute accuracy. relative accuracy was assessed as well: 3d distances between all possible pairs of coordinated points were calculated from the check point coordinates estimated in the bundle adjustment and compared to the corresponding distances calculated from the original check point coordinates. the rmse of the distance differences indicates the 3d relative accuracy.
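both accuracy measures defined here can be sketched in a few lines. the check point coordinates in the following example are hypothetical and serve only to show how the absolute rmse (per coordinate axis) and the relative rmse (over all pairwise 3d distances) are obtained.

```python
# hedged sketch of the absolute and relative accuracy measures of section 2.3;
# the check point coordinates below are invented for illustration only.
from itertools import combinations
import numpy as np

known = np.array([[100.000, 200.000, 50.000],      # surveyed check points (m)
                  [102.510, 201.250, 51.800],
                  [104.980, 202.490, 53.610]])
estimated = np.array([[100.004, 199.996, 50.006],  # estimated in the bundle adjustment
                      [102.514, 201.243, 51.809],
                      [104.973, 202.497, 53.603]])

# absolute accuracy: rmse of coordinate differences per axis (easting, northing, height)
diff = estimated - known
rmse_abs = np.sqrt((diff ** 2).mean(axis=0))

# relative accuracy: rmse of the differences between all pairwise 3d distances
def pairwise_distances(points):
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

distance_diff = pairwise_distances(estimated) - pairwise_distances(known)
rmse_rel = np.sqrt((distance_diff ** 2).mean())

print("absolute rmse (e, n, h):", rmse_abs)
print("relative rmse (3d distances):", rmse_rel)
```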
3. testing
3.1 initial test
the recording system was initially tested at loughborough university. a metal piece of art located on the loughborough university campus was chosen as test object (figure 21a). the test object is a vertical structure with a small diameter on the ground and is accessible from all sides. it was considered representative of the type of heritage object found at the case study site (section 3.2). on the southern side of the test object 17 points with known ordnance survey national grid (osgb36) coordinates were established. in the lower part, which could be reached without auxiliary means (up to approximately 2m), survey targets were used to mark the points. in the upper part of the test object natural points defined by distinctive features, such as corners and intersections of steelwork, were selected. imagery, orientation and position data were collected at 11 camera stations arranged in an arc around the southern side of the test object with an approximate camera-to-object distance of 6m. at this distance some images were acquired with the mounting frame tilted up to 33° in order to cover the entire height of the test object (approximately 6m). the data collected were processed and analysed using the methods described in section 2.3 and the results can be found in section 4.

3.2 case study
the aim of the case study was to test the performance of the recording system at a real heritage site. st. catherine's oratory (figure 21b) on the isle of wight, uk, was chosen as the case study site. st. catherine's oratory is an approximately 11m high octagonal tower built in 1328. it is located in the south of the isle of wight on one of the highest parts of the island [16]. on the eastern side of the tower 22 points with known osgb36 coordinates were established. analogous to the test object at loughborough university, targeted points were used in the lower part and natural points in the upper part of the tower. two data sets were collected during the case study. the first data set (ds1) consists of data collected from 12 camera stations arranged in an arc around the eastern side of the tower with an approximate camera-to-object distance of 10m. the second data set (ds2) consists of data collected from 12 camera stations arranged in an arc around the eastern side of the tower with an approximate camera-to-object distance of 6m. owing to the camera-to-object distance and the height of the tower, the mounting frame was tilted up to 21° in ds1 and 29° in ds2 in order to cover the entire height of the tower. each data set was processed and analysed separately using the methods described in section 2.3. the results of the analysis can be found in section 4.
figure 21: test object at loughborough university (a) and case study site st. catherine's oratory, isle of wight, uk (b).

4. results
4.1 absolute accuracy
absolute accuracy quantifies the recording system's capability to provide data for measurements that are accurate in relation to a national coordinate reference system. it is indicated by the rmse of the differences between the object coordinates of check points estimated in a gap bundle adjustment and their original coordinates. figure 22 depicts the absolute accuracy achieved in the initial recording system test and in the case study using zero or just one single control point (cp).
figure 22: absolute accuracy achieved in the recording system test.
the best accuracy is achieved in the initial test, with values not exceeding 7.0mm. there is no significant difference between using zero or a single control point.
the rmse achieved in the case study using no control points is significantly higher than the rmse of the initial test, with values up to 41.2mm in ds1 and 43.7mm in ds2. the accuracy in ds1 and ds2 is enhanced by using a single control point in the gap bundle adjustment. however, the rmse in ds2 (26.0mm) is significantly higher than the rmse in ds1 (5.9mm). the accuracy variations between the three data sets indicate that the direct exterior orientation parameters used in the gap bundle adjustment are of different accuracy.

4.2 relative accuracy
the relative or inner accuracy quantifies the recording system's capability to provide data for measurements that are accurate in relation to each other. this was assessed by comparing 3d distances between check point coordinates estimated in a gap bundle adjustment with equivalent distances derived from the original coordinates. the rmse of the distance differences indicates the relative accuracy. figure 23 depicts the relative accuracy achieved in the initial recording system test and in the case study using zero or a single control point.
figure 23: relative accuracy achieved in the recording system tests.
the best relative accuracy is achieved for the initial test, with 2.5mm when zero control points were used. similar to the absolute accuracy, the relative accuracy achieved in the case study is worse than that achieved in the initial test. the relative accuracy is also significantly different between the case study data sets ds1 and ds2. when zero control points are used, the rmse increases from ds1 (9.7mm) to ds2 (17.7mm) by 8mm. similar to the results of the absolute accuracy assessment, this indicates accuracy differences between the exterior orientation parameters derived from the three data sets. the utilisation of one single control point seems to have no significant effect on the achievable relative accuracy.

5. smartphone test
smartphones with integrated camera and mems-based orientation and positioning sensors have the potential to facilitate image-based recording with direct exterior orientation determination. when the orientation and position during exposure can be extracted, these phones establish off-the-shelf systems that are in principle similar to the recording system presented in this paper. the usability of smartphones for image-based heritage recording was tested using the "htc desire" smartphone, which integrates a 5 megapixel camera, a gps antenna, a digital compass and accelerometers [17]. in march 2011 the camera of the smartphone was calibrated and the smartphone was used to acquire imagery at a test field established on an outside wall at loughborough university using 22 coordinated points. orientation and position at the time of exposure were extracted using a smartphone application ("imageotag") that prints the data derived from gps, compass and accelerometers on a copy of the original image. imagery, orientation and position data were processed and analysed using the methods described in section 2.3. this resulted in indicators for the absolute (figure 24a) and relative (figure 24b) accuracy achievable when zero or one single control point is used.
the results of the smartphone test are presented in metres (m) instead of the millimetres (mm) used for the recording system test results.
figure 24: absolute (a) and relative (b) accuracy achieved using a smartphone.
figure 24a demonstrates that the smartphone can achieve an absolute accuracy of 1.15m without using control points in the bundle adjustment. when a single control point is used in the bundle adjustment, a significant increase in accuracy is only achieved for easting, where the rmse drops from 1.04m to 0.68m. using a single control point also improves the relative accuracy (figure 24b): the rmse of the relative accuracy changes from 0.85m when no control point was used to 0.66m when a single control point was used in the gap bundle adjustment.

6. discussion
6.1 performance of the original recording system
the results of the absolute accuracy assessment demonstrated that an accuracy level of 44mm can be achieved without control points when suitable exterior orientation parameters are available. with the utilisation of a single control point the absolute accuracy level can be improved to 26mm. as expected, the relative accuracy is better than the absolute accuracy, achieving 18mm without using any control points. the accuracy assessment also revealed significant differences in absolute and relative accuracy between the three data sets. this could be caused by variations in the accuracy of the direct exterior orientation parameters used in the gap bundle adjustment. because the calibration values and exterior orientation parameters were derived from the same data set, the standard deviations of the calibration values are also indicators of the accuracy of the directly measured values from which the exterior orientation parameters were derived. investigating this issue, it was revealed that the standard deviations of the positional offset calibration values varied significantly between the three data sets (table 1).

              easting (mm)   northing (mm)   height (mm)
initial test      7.86           9.21            9.35
ds1              13.40          14.65           15.64
ds2              24.62          37.57           16.74
table 1: standard deviations of positional offset calibration values.

the standard deviations increase from the initial test data set to ds1 and also from ds1 to ds2, demonstrating the decrease in accuracy of the directly measured positions from the initial test to ds2. because the case study standard deviations exceed the expected accuracy of dgps, which is 10mm in plan and 30mm in height [18], the decrease in positioning accuracy is either caused by instability of the fixture of the recording system components to the mounting frame or by a decrease in dgps accuracy. a decrease in dgps accuracy during data collection at st. catherine's oratory could have been caused by tilting the mounting frame for some images, which also tilts the dgps antenna. however, in the initial test, data was collected under similar conditions. further investigations will be conducted in order to identify the reason for the decrease in positioning accuracy. the results of the absolute and relative accuracy assessment were achieved by correcting direct orientation and position measurements using offset calibration values derived from the same data set.
therefore, the calibration values are not independently derived and the results indicate only the theoretical accuracy achievable when well suited calibration values are available. after analysis of the data sets presented here, further test data sets were collected that enabled accuracy assessment using independently derived calibration values. preliminary results suggest that the level of accuracy achieved in the tests presented here can also be achieved with independently derived calibration values, when stable offset calibration is maintained. these results will be presented in a future publication.

6.2 performance of a system based upon a smartphone
as expected, the accuracy achieved using the "htc desire" smartphone is substantially worse than the accuracy achieved using the developed recording system. the smartphone achieved 1.15m absolute and 0.68m relative accuracy without using control points. this significant difference from the results achieved with the recording system is caused by the smartphone sensor accuracy. the accuracy of the smartphone orientation and position sensors is expected to be lower than the accuracy of the recording system dgps and orientation sensor. no information could be found about the compass and accelerometer accuracy, but the standard deviations derived during offset calibration can be used as indicators of orientation accuracy; here standard deviations for heading, pitch and roll between 2° and 3° were achieved. the accuracy of the integrated gps can be expected to be no better than the theoretical positioning accuracy of code-based gps, which is 6-10m [18]. this is higher than the displacement that would result from a rotational error of 3° in the exterior orientation rotation parameters at a camera-to-object distance of 10m. therefore, at close range, the positioning accuracy of the smartphone is probably the limiting factor on the currently achievable absolute accuracy. however, the absolute accuracy achieved in this smartphone test is better than the expected gps positioning accuracy, which can be explained by the offset calibration partly compensating the positional error. similar to the processing and analysis of the recording system data, calibration values and exterior orientation parameters were derived from the same data set. in order to test how well independently derived calibration values can compensate positioning errors, further data collection and analysis will be carried out.

7. conclusion
the results presented in this paper demonstrate that an absolute accuracy of 44mm can be achieved with an image-based recording system combined with direct exterior orientation determination. when a single control point is available for data processing, the accuracy can be improved to 26mm. the recording system also achieves relative accuracy levels of 20mm and below. preliminary results derived from further tests have indicated that this accuracy level can also be achieved when independently derived offset calibration values are used. the recording system is therefore believed to be suitable for many cultural heritage recording tasks. when the survey-grade dgps receiver is replaced by a low-cost device for positioning with centimetre accuracy, the recording system will be a proper low-cost system that is suitable for heritage recording by non-specialists.
the results of the smartphone test (1.2m absolute and 0.8m relative accuracy) demonstrate that even with well suited calibration values the currently achievable accuracy of the gps positioning does not meet the requirements for most cultural heritage recording tasks. however, the usability of smartphones for image-based recording was demonstrated, and with potentially more accurate integrated orientation and position sensors in the future, smartphones could be used for low-cost heritage recording by non-specialists.

8. acknowledgement
the authors wish to acknowledge the investment in a tcm5 orientation sensor by english heritage, which made this project possible. thanks are due to the national trust for granting permission to conduct the case study at st. catherine's oratory. in addition the authors wish to thank the remote sensing and photogrammetry society (rspsoc) for partly funding the first author's attendance at the cipa 2011 conference in prague.

9. references
[1] uzzell, d.l.: introduction: the natural and built environment. in: heritage interpretation, vol. 1, the natural and built environment, london, pinter, 1-14, 1989.
[2] herbert, d.t.: preface. in: heritage, tourism and society, london, pinter, xi-xii, 1995.
[3] power of place office, english heritage: power of place: the future of the historic environment, london, power of place office, 50 pages, 2000.
[4] unesco: convention concerning the protection of the world cultural and natural heritage, http://whc.unesco.org/archive/convention-en.pdf, 2009-05-05.
[5] palumbo, g., ogleby, c.l.: heritage at risk and cipa today: a report on the status of heritage documentation, international archives of the photogrammetry, remote sensing and spatial information sciences 2004, 35(2004)b5, 239-842.
[6] bosch, r. et al.: non-metric camera calibration and documentation of historical buildings, proceedings of cipa 2005, torino, september 2005, 142-147.
[7] chandler, j.h., fryer, j.g.: recording aboriginal rock art using cheap digital cameras and digital photogrammetry, proceedings of cipa 2005, torino, september 2005, 193-198.
[8] wackrow, r. et al.: geometric consistency and stability of consumer-grade digital cameras for accurate spatial measurement, the photogrammetric record 2007, 22(2007)118, 121-134.
[9] bryan, p., chandler, j.h.: cost-effective rock-art recording within a non-specialist environment, international archives of the photogrammetry, remote sensing and spatial information sciences 2008, 37(2008)b5, 259-264.
[10] niu, x. et al.: directly georeferencing terrestrial imagery using mems-based ins/gnss integrated systems, proceedings of xxiii fig congress 2006, munich, october 2006, 16 pages.
[11] guarnieri, a. et al.: low cost system: gps/mems for positioning, proceedings of fig working week 2008, stockholm, june 2008, 10 pages.
[12] schwieger, v., gläser, a.: possibilities of low cost gps technology for precise geodetic applications, proceedings of fig working week 2005, cairo, april 2005, 16 pages.
[13] geneq inc.: sxblue iii. rugged, bluetooth high accuracy l1/l2 rtk-mapping receiver, http://www.sxbluegps.com/sxblue-iii-version1.1.pdf, 2011-01-12.
[14] pni corporation: tcm 5. tilt compensated 3-axis compass module, http://www.pnicorp.com/files/tcm5%20datasheet_05-14-2009.pdf, 2009-05-30.
[15] chandler, j.h., clark, j.s.: the archival photogrammetric technique.
further applications and development, the photogrammetric record 1992, 14(1992)80, 241-247.
[16] english heritage: st. catherine's oratory | english heritage, http://www.english-heritage.org.uk/daysout/properties/st-catherines-oratory/, 2011-05-11.
[17] htc corporation: htc products htc desire – specification, http://www.htc.com/www/product desire/specification.html, 2011-05-12.
[18] konecny, g.: geoinformation. remote sensing, photogrammetry and geographic information systems, london, taylor & francis, 248 pages, 2003.

spherical panoramas and non-metric images for long range survey: the san barnaba spire, sagrada familia, barcelona, spain
elisa cingolani, gabriele fangi
università politecnica delle marche – ancona, italy
elisacingolani1987@libero.it, g.fangi@univpm.it
keywords: spherical photogrammetry, long focal length, non-metric images
abstract: the sagrada familia by gaudi in barcelona, about 80 years after the death of its creator, is quickly taking its final shape and, the actual builders say, maintaining the original form that gaudi would have wanted. complicated and elaborated forms, following the construction layout of the chapel of colonia guell in santa coloma, tend to reproduce, on a gigantic scale, the organic forms of trees, drawing on the charming and attractive complex of the small church derived from the model of wires used by gaudi for its design. it has been long debated, and still is, whether this approach is "camouflage" and how far it is consistent with the attitude of gaudi's architecture, in the sense that he saw a sort of self-generating form of architecture during its own construction, gradually responding to the stress placed by the growth of structures, shapes and materials ("we do not reproduce the forms but we are able to reproduce a character owing its spirit," a. gaudi). but beyond this, the reality remains of the gradual suppression of what gaudi realized until his death. basically the facade of the nativity, with its striking features, ending with four original towers as hyperboloid pinnacles with glittering glazed mosaics, is the only one that was finished by gaudi himself, in particular the san barnaba spire. given this progressive "destruction", it is very important to analyze, survey and plot what gaudi realized, in order to recover the original forms and keep them in their gaudian formal and constructive features. the spire of st. barnabas is one of the most architecturally significant parts of the whole building and its survey poses major technical problems; a possible solution, represented by the experience shown here, was already experimented with in the nineties as one of the first applications of expeditious photogrammetric survey techniques (clini, fangi, 1990). the technical problems consist basically in the difficulty given by its height above the street level, about 100 meters. long focal lenses have to be used to get a suitable resolution and accuracy. we wanted to repeat the survey now using a photogrammetric technique different from the old one, which was the dlt algorithm for non-metric images. the new technique is spherical photogrammetry. multi-image spherical photogrammetry uses as sensor a pseudo-image, the spherical panorama, composed from the images taken from the same station point. for details of spherical photogrammetry see (fangi, 2,3,4,9).
a particular procedure appropriate for the orientation of panoramas taken with very narrow field of view lenses has already been set up and used for the orientation and plotting of the three minarets of the great mosque of the omayyads in damascus, syria. their heights range from 60 to 80 meters above the courtyard pavement of the mosque. the technique consists in taking panoramas with different focal lengths from the same station point (fangi, newcastle, 2010), one with a wide angle (wa) lens and another with a narrow angle (na) lens, adding the resolution of the na panorama to the stability of the wa panorama. the same approach has been used in the sagrada familia for the survey of san barnaba's spire. in 1990 the authors made a survey of the same spire, but compared with the 1990s there is now one additional difficulty: the rear of the spire is no longer visible because of the construction of the roof of the church, while it was visible in 1990. the solution has therefore been to use the original images taken in the 1990s for the rear of the spire and the spherical panoramas for the rest, i.e. the part toward the façade, using the original control points. we then had to make a combined adjustment of non-metric images, using the dlt approach, and spherical photogrammetry algorithms. the restitution has indeed been carried out using both types of imagery, spherical panoramas and non-metric images. the results are satisfactory in the sense that the principles of quick photogrammetry have been respected: short surveying times and simple, inexpensive tools, while still reaching a suitable outcome.
1. introduction
spherical photogrammetry is a new photogrammetric technique that makes use not of the original photographic images but of a kind of cartographic projection (the spherical panorama) of a virtual sphere onto which the original images, taken from the same point over 360°, are projected. for the details of the technique see (fangi, 2, 3, 4, 9). from the panorama point coordinates the two directions to the corresponding object point are derived. the object space is then obtained by intersection of projective lines coming from two or more oriented panoramas. there are already very many successful examples of such surveys. the advantages are very high resolution, a field of view of up to 360°, completeness of documentation, the possibility to make quicktime movies, very short surveying times and low-cost equipment.
2. the panoramas with narrow fov
in the spherical panorama the radius of the virtual sphere coincides with the focal length of the camera, or with its principal distance, where the distortions have already been estimated and corrected by the stitching software itself by overlapping the images. the calibration, which is essentially the determination of the principal distance, takes place by closing the panorama at 360°. the closing error ξ is distributed over all the images and the corrected principal distance r' is obtained from the original value of the radius r simply by r' = r(360° − ξ)/360° (szeliski, 1997). in the case of a narrow or very narrow fov lens it becomes in practice impossible to close the panorama at 360° because too many images would be needed. in this case it is preferable to couple the na panorama with one wa panorama taken from the same station point and carry out the self-calibration with a block adjustment, imposing the geometrical constraint of the coincidence of the two taking points.
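as a minimal illustration of the closure calibration above, the following python sketch applies r' = r(360° − ξ)/360° to a nominal principal distance expressed in pixels; the numbers and function names are assumptions made for the example, not values from the survey.

```python
# illustrative sketch (not the authors' software): correcting the panorama
# radius (principal distance, in pixels) from the 360-degree closure error,
# r' = r * (360 - xi) / 360, as given above (szeliski, 1997).

def corrected_principal_distance(r_pixels: float, closure_error_deg: float) -> float:
    """return the corrected sphere radius r' for a panorama that closes
    with an angular misclosure of closure_error_deg degrees."""
    return r_pixels * (360.0 - closure_error_deg) / 360.0

if __name__ == "__main__":
    # hypothetical numbers: a 50 mm lens and a 5.2 micrometre pixel pitch give
    # a nominal radius of about 9615 px; assume a 0.3 degree misclosure.
    r = 50e-3 / 5.2e-6
    print(round(corrected_principal_distance(r, 0.3), 1))
```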
the wa geometry is robust although the resolution is poor, while the na panorama has a weak geometry but a good resolution. this is already a good reason to couple wa panoramas with na ones; in addition, this combination is necessary for another reason too.
2.1 the set-up of the panorama
in the panorama formation phase an essential operation for a good result of the survey (accuracy of the restitution) is the vertical set-up of the sphere axis, corresponding to the set-up of a theodolite. the remaining lack of verticality can be estimated and corrected by two correction angles in the subsequent orientation phase, but their values should be as small as possible. in fact, in our approach the model formation is performed by initially setting the two correction angles to zero; they are afterwards estimated in the block adjustment. to get the vertical set-up, performed during the panorama formation inside the stitching software, at least two vertical lines in the object space should be observed, placed if possible 90° apart from each other in the panorama. in a wa panorama this condition is easily satisfied, while in the na panorama the imaged zone is normally a small part of the sphere: one can find a vertical line in this zone, in a meridian plane, setting to zero the rotation αx around the axis passing through it (figure 1), but hardly another one 90° apart from it, making it impossible to set the other angle αy close to zero. a correct αy estimation can be done using the wa panorama: in fact the two zenithal directions zna and zwa (one from the wa panorama, the other from the na panorama) toward the same object point p must have the same value. one can estimate the difference αy = zwa − zna to add to the na zenithal directions to get their correct values.
figure 1: the geometry of the narrow angle panorama
the described technique has been successfully employed in the two examples presented here, namely the survey of the three minarets of the great omayyad mosque in damascus, syria and the survey of san barnaba's spire in the sagrada familia in barcelona, spain.
3. the great mosque of the omayyads in damascus, syria
the umayyad mosque of damascus, syria, is one of the greatest and most noticeable mosques in the world (figures 1 and 2). the mosque was built on the site of a christian basilica dedicated to john the baptist dating from the time of the roman emperor constantine. the exterior walls of the mosque are still of roman construction. inside the courtyard the walls are covered by beautiful mosaics, the largest ever made. the center of the courtyard is occupied by the ablution fountain; at the two opposite corners of the temple stand two minarets, the so-called jesus minaret on the left (east side) and the qayt bay minaret on the right (west side) (figure 1). in the center of the northern side there is the bride minaret (figure 2).
3.1 the survey
in august 2010 the survey was carried out using a canon eos 450d camera mounted on a spherical head held by a photographic tripod. the survey was limited to the outdoors. the mosque has three minarets: the minaret of the bride, the minaret of jesus, and the minaret of qayt bay. their heights are conspicuous, reaching 70 meters above the court pavement. the layout of the panorama network is visible in figure 5.
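the αy correction described in section 2.1 can be sketched as a simple averaging of zenith-direction differences between the wa and na panoramas toward shared points; the python below is an illustrative sketch with hypothetical observations, not the software actually used.

```python
import numpy as np

# illustrative sketch (assumed data layout, not the authors' code): estimate the
# tilt correction alpha_y of a narrow-angle panorama as the mean difference
# between zenith directions measured in the wide-angle and narrow-angle
# panoramas toward the same object points, alpha_y = z_wa - z_na.

def estimate_alpha_y(z_wa_deg: np.ndarray, z_na_deg: np.ndarray) -> float:
    """z_wa_deg, z_na_deg: zenith angles (degrees) of the same points observed
    in the wa and na panoramas, in corresponding order."""
    return float(np.mean(z_wa_deg - z_na_deg))

def correct_na_zenith(z_na_deg: np.ndarray, alpha_y: float) -> np.ndarray:
    """apply the estimated correction to the na zenith directions."""
    return z_na_deg + alpha_y

if __name__ == "__main__":
    z_wa = np.array([62.41, 70.02, 75.85])   # hypothetical observations
    z_na = np.array([62.32, 69.93, 75.77])
    a_y = estimate_alpha_y(z_wa, z_na)
    print(round(a_y, 3), correct_na_zenith(z_na, a_y))
```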
for the photographic takings of the outside of the whole monument some 40 stations have been made, in practice from all the available positions. for the takings of the minarets three types of lenses have been used 28 mm, 50 mm, and 200 mm from the same station points. in the network adjustment we added the constraints for the taking points of different focal lens panorama, to be coincident. we used also the selfcalibration, consisting in the estimation of the camera focal length. in this manner we could estimate correctly the correction angles for the na panorama, as it is described in 2. here we present the restitution of the minarets only, although the whole complex of the exterior surfaces of th walls have been plotted. the plotting of the minarets was successful and the accuracy was satisfactory. in figures 6, 7 some views of the minarets are displayed, in particular of the most conspicuous one, the qayt bay minaret. figure 2: panorama of the front (southern side) of the mosque. on the center the ablution fountain, on the left the minaret of jesus on the right the qayt bay minaret. figure 3: panorama of the entrance (northern) side of the mosque. the bride minaret is visible ________________________________________________________________________________ geoinformatics ctu fce 2011 112 figure 4: corner of the courtyard and the qayt bay figure 5: damascus great mosque, minaret, rendering the orientation network of the panorama survey figures 6, 7: minaret f qaytbay – western-southern side, the view and some details 4. the sagrada familia in barcelone – the survey of san barnaba spire by spherical photogrammetry the san barnaba spire was the only one built by the architect a. gaudi himself, just before his accidental death in 1923. any of the eight spires placed on the two opposite façades, the nativity‟s façade –western sideand the façade of the passion – east side is dedicated to a different saint. the top of the spire has an elevation of about 100 meters above the street pavement. twenty years ago, we made the survey of the spire making use of a camera equipped with 300 mm focal lens. we oriented the camera with dlt approach (1, clini, fangi). we wanted to repeat the experience using the ________________________________________________________________________________ geoinformatics ctu fce 2011 113 spherical photogrammetry. we could make panoramas only from the marina square, because the opposite side, facing south-west, was not visible any more due to the roof of the church, built in the meantime. so we had to use the original images. we made a combined adjustment orientation, non metric images with dlt approach, and spherical panorama, using the original control network and control points. in addition for any station we made three panoramas, with three different focal length, 50mm, 200mm, and 500mm. for the orientation of na panorama we followed the approach described in 2. figure 8: barcelona – sagrada familia, the orientation network – the southern stations are the old ones made in 1990, the 2010 panorama are the northern side. we could make a combined adjustment dlt + pano, putting in the same adjustment block non-metric images and panoramas, using the old control points. figure 9: panorama 50 mm focal length showing the nativity façade the san barnaba‟s spire is the one on the left north ________________________________________________________________________________ geoinformatics ctu fce 2011 114 figure 10: panorama made with 500 mm focal length. 
the 500 mm panorama complete at 360° would be of the width of 600.000 pixel. the letter b stands for barnaba in figure 9 a panorama made with 50 mm focal length, and in figure 10 a panorama 500 mm focal length, the dif ferent resolution is evident. in a panorama with large scale, very many details are visible, including all the small pieces covering the spire. the size of the shield is 3 meters, the diameter of the balls range from 80, 60 to 50 cm. the top of the spire, covered by glass, shining in the sun, is 17 meters long. it would have been impossible to orient such a panorama without the “help” of wa panorama. 5. conclusions a procedure for the appropriate orientation for long range survey has been developed and tested using the algorithms of spherical photogrammetry. the procedure consists in making from the same station point more than one panorama only using different focal lenses. the views made with telephoto lenses must necessarily be restricted to a limited area of the sphere, have good resolution, however, have an unstable geometry and are difficult to orient, while the panoramas made with wide angle lenses have poor resolution, but good geometry. the coupling between the two types of views can combine the advantages and eliminate the weaknesses of both types. this technique is advantageous in the case of surveys of bell towers, minarets and pinnacles, or long-range objects. it has been also presented a case of the combined adjustment of non metric images, oriented with the dlt algorithm, and self-calibrated spherical panoramas. 6. aknowledgements the a. wish to thank domenico azzarone for the restitution of the san barbaba spire. 7. references [1] p. clini , g. fangi (1991) – two examples of non-conventional photogrammetric techniques: the nativity‟s interior facade and the spire of s.barnaba‟s bell tower in the sagrada familia – barcelona – cipa xiv intern. symp. delphi october 1991, 169-182 [2] g.fangiuna nuova fotogrammetria architettonica con i panorami sferici multimmagine –sifet, symposium arezzo, 27-29 giugno 2007, cd ________________________________________________________________________________ geoinformatics ctu fce 2011 115 [3] g.fangi – the multi-image spherical panoramas as a tool for architectural surveyxxi international cipa symposium, 1-6 october 2007, atene, isprs international archive – vol xxxvi-5/c53 – issn 1682-1750 – cipa archives vol. xxi-2007 issn 0256-1840 pg.311-316 [4] g.fangi (2007) – la fotogrammetria sferica dei mosaici di scena per il rilievo architettonico – bollettino sifet n. 3 2007 pg 23-42 [5] g.fangi, p.clini, f.fiori (2008) – simple and quick digital technique for the safeguard of cultural heritage. 
the rustem pasha mosque in istanbul – dmach 4 2008 digital media and its applications in cultural heritage 5 6 november, 2008, amman pp 209-217 [6] g.fangi (2008) el levantamiento fotogrametrico de plaza de armas en cuzco por medio de los panoramas esfericos xxx convegno internazionale di americanistica perugia (italia), 6-12 maggio 2008 [7] e.d‟annibale, g.fangi (200ř) –interactive modeling by projection of oriented spherical panorama – 3d-arc‟200ř, 3d virtual reconstruction and visualization of complex architectures – trento 25-29 february 2009isprs archives vol xxxviii-5/w1 : 1682-1777 su cd [8] g.fangi (2009) –further developments of the spherical photogrammetry for cultural heritage – xxii cipa symposium, kyoto, 11-15 october 2009 [9] g.fangi (2010) – multi-scale mult-resolution spherical photogrammetry with long focal lenses for architectural surveys –isprs mid-term symposium newcastle, june 2010 [10] e.d‟annibale, s.massa, g.fangi (2010) photomodeling and point clouds from spherical panorama nabatean architecture in petra, jordanworkshop petra 4-8 november 2010 [11] c. pisa, f. zeppa, g. fangi spherical photogrammetry for cultural heritage san galgano abbey, siena, italy and roman theatre, sabratha, libya acm florence eheritage 2010 25-10-2010 [12] l.barazzetti, g.fangi, f.remondino, m.scaioni, (2010) automation in multi-image spherical photogrammetry for 3d architectural reconstructions the 11th international symposium on virtual reality, archaeology and cultural heritage vast (2010) a. artusi, m. joly-parvex, g. lucet, a. ribes, and d. pitzalis (editors) [13] r. szeliski and h. shum. (1997) creating full view panoramic image mosaics and environment maps. in proc. of siggraph, pages 251-258 [14] www.cipa.icomos.org/objectives figure 11: the 3d model of the spire (plotting by d.azzarone) http://www.cipa.icomos.org/objectives ________________________________________________________________________________ geoinformatics ctu fce 2011 116 figure 12: the 3d model of the spire (plotting by d.azzarone) ________________________________________________________________________________ geoinformatics ctu fce 2011 117 figure 13: the 3d model of the spire (plotting by d.azzarone) ___________________________________________________________________________________________________________ geoinformatics ctu fce 228 “geoheritage” gis based application for movable heritage albina moscicka institute of geodesy and cartography, cartography department 27, modzelewskiego str., 02-679 warsaw, poland albina.moscicka@igik.edu.pl keywords: digital heritage, web gis, archives, public access, on-line map abstract: the paper will present the results of a research project „a methodology for mapping movable heritage”. this project, financed by the polish ministry of science and higher education in 2008-2010, was realized by the institute of geodesy and cartography in cooperation with the research and academic computer network (portal polska.pl), the central archives of historical records and department of art history of the wroclaw university. the idea of the project was to simplify access to digital movable cultural heritage by the use of spatial information. the main aspect of the project was to use a geographic information system (gis) as a technology and as a tool to integrate different digital collections, present their content in one space and provide online access to them from one common level – from an online map. 
the essence of the research was to present on the online map movable monument as multi-spatial object. the base of this assumption is that most of monuments, especially movable ones, can have several places in the geographical space that are connected with them (several various space relations). as a rule archival documents were created in one place, describe the other, today can be kept in places far away from the place they were prepared, and what more the parts of the same collection can be kept in different archives. moreover, one single document can be connected or have relations (typological, thematically, temporal, spatial) with other relations to the same or the other one. the reason for it is that documents concerning various places are housed in the same archive, various documents can present the same place or the place of creating particular document can be the place of housing another. in the project the basic source material was digital collections of original records. their metadata defined in the international standards of monuments’ description were used for connecting digital monuments with the geographic space. with the use of these standards, the internet application for presenting cultural heritage on the map was developed. it can be found at www.geoheritage.polska.pl (polish version) and www.geoheritage.poland.pl (english version). the application is based on the geographic information system (gis), and its functionality is mainly the elements of selecting the resources, presenting the documents on the map in different ways and finding their images. the paper will present methodological solutions necessary for building on-line map of movable heritage together with the functionality of the application. 1. objectives movable monuments are these part of the heritage, which has no simple relation to the geographic space. immovable monuments (churches, palaces, castles etc.) have unchangeable geographical localisation – they are located in one precisely defined place in the space: in some city/village, on some street etc. accuracy of determination of this localisation depends only on our needs. movable monuments (archival maps, manuscripts, old prints, works of art etc.) are the monuments that are not related to one place in the space – they can be easily moved from one place to another, so their relation to the geographic space can be changeable in time. most of the movable monuments are created in one place, in the past they could be kept in different places, and now are stored in other place (archive, museum or library). what more, such monument, e.g. a written document (manuscript, old print) can refer to other places in geographical space (it can describe different places). such situation is especially common on the territory of europe – because of their rich history and related to this people migration, borders changes, exportation of cultural goods etc. what more, it is difficult (in most cases it is impossible at all) to know these localisations with the same accuracy as in case of immovable monuments. the starting point of the research was that polish archives are spread all over poland and also all over the world. as a rule archival documents were created in one place, describe the other, and today can be kept in places far away from the place they were prepared. what more, the parts of the same collection can be kept in different archives. complicated polish history caused that it is necessary to look, e.g. 
for the plan of wieliczka (near krakow) in gdansk (more than 500 km north of krakow), and the plan of zabludow (near bialystok) at the warsaw university of technology (more than 200 km west of bialystok). a lot of polish cultural goods can be find in other countries. in connection with it the authors have made an attempt to work out the idea of geovisualization information on movable monuments (moscicka and marzec, 2008a, 2008b). the main task was to develop the system of geographical http://www.geoheritage.polska.pl/ http://www.geoheritage.poland.pl/ ___________________________________________________________________________________________________________ geoinformatics ctu fce 229 information (gis) on movable historical monuments and visualization of data gathered in it on the on-line map in the way that allows searching and finding movable monuments using geographical information. crucial element of new presentation was also a timeline, integrated with the application that allows selecting historic and artistic periods, and defining the time period of our search. the task was complicated because so far this kind of research has not been undertaken. there are neither polish nor international sources that describe this problem and the way of its implementation. the functionality of the on-line map was developed on the basis of the results of the research of internet users‟ needs and consultations with experts. the main aim was to meet users‟ expectations. further research of their needs that will help us to define the directions of application development; it will be conducted on larger scale, using the pilot application. 2. methodology the essence of the research is to present on the on-line map a movable monument as multi-spatial object (fig. 1). the basis of this assumption is that most of monuments, especially movable ones, can have several places in the geographical space that are connected with them (several various space relationships). in our research we define “place” as a city or village, because in the most cases, there is no possible to determine more precisely e.g. place of creation. these places can be: the place where the monument was created; the place or places where the monument was housed in the past; the place where the monument is kept recently; the place or places connected with the monument thematically – in case of maps it is a part of space presented on them. moreover, one single document can be connected or have relationship (typological, thematically, temporal, spatial) with other document. the reason for it is that documents concerning various places are housed in the same archive, various documents can present the same place or the place of creating particular document can be the place of housing another. in the project the basic source material was digital collections of original records. these high quality scans were the material to prepare electronic documents presenting monuments called the digital copy of the monuments. moreover, one single document can be connected or have relationship (typological, thematically, temporal, spatial) with other document. the reason for it is that documents concerning various places are housed in the same archive, various documents can present the same place or the place of creating particular document can be the place of housing another. in the project the basic source material was digital collections of original records. 
these high quality scans were the material to prepare electronic documents presenting monuments called the digital copy of the monuments. figure 1: multi-spatial object few space relations of one monument as a source data in the project digital copies of the movable monuments were used. the digital copies of the monuments are described by means of metadata that facilitates searching, controlling, understanding and managing them (ordinance, 2006). the metadata are defined in the international standards of monuments‟ description. it describes ___________________________________________________________________________________________________________ geoinformatics ctu fce 230 which elements can or should be used in the electronic description of the copy of the monument to get basic characteristics of the monument and meet requirements for electronic documents. moreover, such descriptions are used by archivists or historians, so they are understandable for all data deliverers, gis creator and the final users. in the project, the digital copies of the movable monuments together with their metadata prepared in the international standards for monument description were used. these standards, beside basic information identifying a monument, contain also data that enables to link it with the geographical space. moreover, they also contain data which allow to define relations between objects. as the source collections two types of resources were used: part of collection “cities in archival documents”, presented so far in the portal polska.pl; more than 100 written documents, described in xml files in the ead standard (meissner, 2002; url; wajs, 2000, 2003) were prepared; works of art from the church in zorawina (dolnoslaskie voivodship); almost 100 works of art, described in xml files in the objectid standard were prepared. as the background cartographic data digital map of poland in scale 1:200 000, developed at the institute of geodesy and cartography, warsaw, was used. together with the descriptions of the above monuments, dictionary of historical and artistic periods, types of monuments, institutions and people were created. there was also prepared the dictionary of geographic places connected with the monuments. in this dictionary, connections with the places used in monuments descriptions and coordinates of these places were defined. coordinates were determined in the coordinate system of the digital map of poland applied. use of dictionaries allows to connect monuments with the geographic space and, consequently, to build geographic information system about movable heritage. with the use of standards of describing the monuments, the pilot internet application for presenting cultural heritage on the map was developed. it is based on the gis, and its functionality is mainly the elements of selecting the resources, presenting the documents on the map in different ways and finding their images. the result of the research is the solution for presenting in one common surface – in one application – not only information about places connected with the movable monuments, but also their images. monuments are presented in such way that enables studying historical objects in comparable degree as during the visit in archives (wajs and marzec, 2009). 3. results the result of the project discussed is a web application for the presentation of the heritage on the map. it is accessible in the internet at the www.geoheritage.polska.pl (polish version) and www.geoheritage.poland.pl (english version). 
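the "multi-spatial object" idea described above (one movable monument linked to places of creation, storage and topic, resolved to coordinates through a place dictionary) can be sketched as a small data model. the python below is purely illustrative: its field names and the sample coordinates are hypothetical and do not reflect the project's actual schema.

```python
from dataclasses import dataclass, field

# hypothetical sketch of a movable monument as a multi-spatial object: several
# named places per monument, resolved to coordinates through a place dictionary
# (gazetteer). names and values are illustrative only.

@dataclass
class Place:
    name: str
    lat: float
    lon: float

@dataclass
class MovableMonument:
    title: str
    created_in: str
    stored_in: str
    topic_places: list = field(default_factory=list)

gazetteer = {
    "wieliczka": Place("wieliczka", 49.987, 20.065),
    "gdansk": Place("gdansk", 54.352, 18.646),
}

def places_of(monument: MovableMonument, relation: str) -> list:
    """return coordinates for one spatial relation of a monument."""
    names = {"creation": [monument.created_in],
             "storage": [monument.stored_in],
             "topic": monument.topic_places}[relation]
    return [gazetteer[n] for n in names if n in gazetteer]

plan = MovableMonument("plan of wieliczka", created_in="wieliczka",
                       stored_in="gdansk", topic_places=["wieliczka"])
print(places_of(plan, "storage"))
```

each relation ("creation", "storage", "topic") behaves like a separate thematic layer, which mirrors the way the application lets the user switch the spatial aspect under which the same set of monuments is shown on the map.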
the start page of the application is presented in figure 2. figure 2: start page of the “geoheritage” application file:///j:\geoinformatics\igik_issue\www.geoheritage.polska.pl file:///j:\geoinformatics\igik_issue\www.geoheritage.poland.pl ___________________________________________________________________________________________________________ geoinformatics ctu fce 231 the elements of selecting the resources, presenting the documents on the map in different ways as well as finding their descriptions and images reflect the main functionality of the application. the major parts of the application developed are: the main window – map window, where monuments are presented, together with the tools necessary for zooming and moving map; the top menu with the additional tools as history of presented maps, link to the present map view and icon for printing them, and – very useful – full screen option; interactive timeline, based on time of creation resources presented on the map; the left menu with the main tools for searching and presenting thematic data. in the left menu, first of all, there are located tools for selecting the resources. searching the resources is possible at two levels. easy search contains text searching of all elements of monument description based on entered word, i.e. searching in the name or comments on the monument. advanced search contains both: elements of easy search and also gives possibility to extend the selection criteria – to the type of the monuments, dates or creation period, institution of storing, and also to selection of geographic criteria. resources can be searched by dates or period of creation. there are two types of periods: historical periods – based on polish historical events, and artistic periods – based on styles in art. moreover, this search is linked with the timeline. by choosing a period from the list or by entering the dates in the menu, one automatically changes the timeline settings. this feature works both ways: timeline is interactive and is divided into years, historical and artistic periods. it is possible to choose historical or artistic period by clicking in a colourful line that symbolizes exact period, as far as moving arrows pointing years on the time scale. any change on timeline, as far as using search menu, results in reducing or extending the amount of monuments presented on the map. searching using geographic criteria is possible by choosing name of a place and also by choosing an administrative unit (on different level). what is more important, it is possible to select what kind of places we would like to find: place of creation, storage place or places which are connected with the document topic. below the search tool list of monuments is located. it is a list of all objects currently presented on the map. the resource presented on the map can be enriched by choosing the view with historical borders of poland and visualize the historical context of creation or storage the resource in the selected period. below the list of monuments, one of the most important tool – a tool for changing thematic content of the map, is located. it was assumed that the way of presentation resources on the map depends on users‟ needs or interests. so, the tool for changing aspect of presentation resources, which allows to show monuments by: creation places, described places and storage places is provided. each of them works as a separate independent thematic layer. 
what is the most important, each of them is based on the same set of monuments but present them in different spatial aspect. it is possible to move between layers and change the aspect of presentation on the map of the same set of monuments in any time. the functionality of the application is also connected with the interactivity of the map. the icons that appear on the map and symbolize the places connected with the monuments are active. when the user move a mouse cursor on them – the number of monuments in chosen place will be shown, together with the name of place or administrative unit to which resources symbolized by the icon are related. when the user clicks on the icon, the menu with the additional options related to changing thematic map content or aspect of resources presentation, will be shown. there is an option which shows all (or selected) monuments created in pointed place. if this option was chosen, the list of chosen monuments will be presented in a new window. these chosen monuments are listed in a new window and they can be presented on the map as the new collection – in such case monuments‟ list will be replaced with the selected monuments‟ list and new collection will be presented on the map. if one object from the list was selected, its image and its description will be shown in a new window. functionality presented so far is related to a set of monuments. in the application there are also solutions for the presentation of spatial information about one single object. by choosing one object from the list of monuments all places connected with that monument – the place of its creation, its storage place and – if they appear – places concerning its topic – are presented on the map. in some cases there is possible to present on the map additional spatial information about topics connected with the presented objects, i.e. places of activity of this monument‟s author. as in the case of presentation monuments‟ collections, the icons presenting spatial information about one single object are also active. when the user clicks on them he/she will see menu from which the other options can be chosen. there are the same options as in case of icons representing a set of monuments, as well as, e.g. possibility of selecting monuments connected with the chosen one (i.e. monuments from the same collection). from this one single object icons' menu the users can also go to monument description and to its image. because monuments described with the use of international standards were used as the source data for monuments description, the scope of the information about each object is the same as in the standard (ead or object id). the way of image presentation was one of the most important aspects in the project. the authors assumed that the user should be able to study archival documents in comparable degree as during the visit in archives. thus, erez imaging server is used for the presentation of objects. ___________________________________________________________________________________________________________ geoinformatics ctu fce 232 4. conclusion planning stage and also research on behaviour and needs of users were very important parts of the project. they allowed to define the functionality of the application. the element that ensures effectiveness of the works is the implementation of modern information technology. also truly important is to ensure enough time for testing the final version of the application before its implementation. 
applying the international standards of monument descriptions is the only way to develop, promote, exchange and enlarge the database. the key problem in developing such initiatives is unwillingness to share the resources (also the digital copies) and low level of digitalization of polish archives. close cooperation between the experts from different institutions was one of the most important factor that enabled the realisation of the project. it ensured an interdisciplinary approach to the problem and professionalism in every stage. the authors have not known other (the same or similar) solutions. no information on similar solution has been found in the specialist literature. the application developed by the authors is the only one that presents this topic in such complex way. 5. references [1] meissner, d., (ed.): rlg best practice guidelines for encoded archival description (translation: h. wajs, agad), research libraries group, california, usa, 2002, [2] moscicka, a., marzec, m.: methodology for mapping movable heritage assumptions of the project (in polish), conference “digital meeting with monuments the status and development prospects of contemporary methodology”, wroclaw, poland, 19 september 2008, [3] moscicka, a., marzec, m.,: the advantages of geovisualization (in polish), nask review 3,warsaw, poland, 2008 [4] ordinance of the minister of internal affairs and administration: structure of the necessary elements of electronic documents (in polish), journal of laws, no 2006, pos. 1517, warsaw, poland, 30 october 2006 [5] http://www.dlib.indiana.edu/services/metadata/activities/eadmanual.pdf [6] wajs, h.: ead basic information (in polish), 2000, http://www.agad.archiwa.gov.pl/ead/ead.ppt [7] wajs, h.: polish road to the standardization of archival description (in polish), digital archives, e. rosowska (ed.), warsaw, poland, 2003 [8] wajs, h., marzec, m.: dynasty on the internet (in polish), nask review 2/2009, warsaw, poland http://www.dlib.indiana.edu/services/metadata/activities/eadmanual.pdf http://www.agad.archiwa.gov.pl/ead/ead.ppt application of computer vision methods and algorithms in documentation of cultural heritage david káňa1, vlastimil hanzl2 1geodis brno, ltd lazaretní 11a, brno, czech republic dkana@geodis.cz 2brno university of technology, faculty of civil engineering veveří 331/98, brno, czech republic hanzl.v@fce.vutbr.cz abstract the main task of this paper is to describe methods and algorithms used in computer vision for fully automatic reconstruction of exterior orientation in ordered and unordered sets of images captured by digital calibrated cameras without prior informations about camera positions or scene structure. attention will be paid to the sift interest operator for finding key points clearly describing the image areas with respect to scale and rotation, so that these areas could be compared to the regions in other images. there will also be discussed methods of matching key points, calculation of the relative orientation and strategy of linking sub-models to estimate the parameters entering complex bundle adjustment. the paper also compares the results achieved with above system with the results obtained by standard photogrammetric methods in processing of project documentation for reconstruction of the žinkovy castle. keywords: computer vision, interest operator, matching 1. introduction images captured by digital cameras are one of the most important form of information in documentation of cultural heritage. 
effective assignment of camera pose in space is necessary for the subsequent use of the images for measuring purposes. the automatic process of finding exterior orientation can be divided into three main tasks: key point finding and matching, relative orientation, and bundle adjustment. our paper presents a practical experiment with such a procedure.
2. key points extraction
2.1. sift – scale invariant feature transform
the initial phase during the comparison and relative orientation of two images is to choose characteristic or key points in the images. a key point should unambiguously characterize an image area so that this area can be reliably found and compared with the same area in a different image. by finding corresponding points in both images a corresponding couple (correspondence) is defined. for the detection and comparison of significant points in the image the sift operator was chosen as the most convenient detector. unlike simple correlation between two areas in the images of, for example, 21 x 21 pixels, this detector is partly invariant to changes of viewing geometry, i.e. rotation (circa 15 degrees) and scale change, and is partly invariant to noise as well. the sift detector is based on searching for extrema in the images by computing differences of images produced by convolution of the image function i(x,y) with a gaussian filter g(x,y,σ) with variable values of σ:

$$d(x,y,\sigma) = l(x,y,k\sigma) - l(x,y,\sigma) = \big(g(x,y,k\sigma) - g(x,y,\sigma)\big) \ast i(x,y) \qquad (1)$$

the exact procedure is described for example in [2].
figure 1: blurred images and their differences
2.2. finding extremities
single images with different degrees of blurring are subtracted from each other, producing difference images. these differences are evidently an approximation of the second derivative of the image function i(x,y) and serve to detect local extrema. after creation of the difference images (fig. 1), each pixel value is compared with its eight neighbouring pixels in the same image and the nine neighbouring pixels in the images blurred one level higher and one level lower. if the value of the tested pixel is the lowest or the highest of all these neighbours, the pixel is chosen as a possible key point. once a key point candidate is found by comparison with its neighbours, it is necessary to decide about its stability, and therefore about its possible rejection, on the basis of information about its location, scale and the ratio of principal curvatures. this information enables effective removal of undesired points in low contrast areas. for each point its 3 x 3 pixel surrounding is approximated by a three-dimensional quadratic function; the maximum or minimum of this function is then found, which defines the exact location of the key point with subpixel accuracy. points along edges are removed as well, according to the ratio of the curvatures in two perpendicular directions.
2.3. orientation assignment
a certain orientation can be assigned to each key point. this step ensures the invariance of the key point descriptor to image rotation: the descriptor is expressed relative to the key point rotation. the orientation is computed in dependence on the degree of smoothing for the given key point and does not depend on scale.
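a minimal python sketch of the difference-of-gaussians construction of eq. (1) and the 26-neighbour extremum test of section 2.2 is given below; the σ values and number of levels are assumptions of the example, and this is not the sift implementation [1] used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# minimal sketch of eq. (1) and the extremum search of section 2.2; sigma and k
# are assumed values. this only illustrates the difference-of-gaussians idea.

def dog_stack(image: np.ndarray, sigma0: float = 1.6, k: float = 2 ** 0.5,
              levels: int = 5) -> np.ndarray:
    """return a stack of difference-of-gaussians images d = l(k*sigma) - l(sigma)."""
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(levels)]
    return np.stack([blurred[i + 1] - blurred[i] for i in range(levels - 1)])

def is_local_extremum(dog: np.ndarray, s: int, y: int, x: int) -> bool:
    """compare a pixel with its 8 neighbours in the same dog image and the
    9 neighbours in the scales above and below (26 comparisons in total)."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    centre = dog[s, y, x]
    return centre == cube.max() or centre == cube.min()

if __name__ == "__main__":
    img = np.random.default_rng(0).random((64, 64))
    d = dog_stack(img)
    print(is_local_extremum(d, 2, 10, 10))
```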
in the blurred image, the gradient magnitude values m(x,y) and orientation values θ(x,y) of the function l(x,y) are computed. then the orientations in the surrounding of the key point are evaluated and an orientation histogram of the neighbouring pixels is built, containing 36 bins covering the 360 degree range in 10 degree intervals. in this histogram a peak is found (the bin with the largest count) and this dominant rotation is assigned to the key point. further, it is tested whether other bins reaching 80 percent of the largest count are present. if so, a new key point is created with the same coordinates but with a new orientation given by this direction. as a result, in some places several key points can arise with the same coordinates but with various orientations.
2.4. key point descriptor
the parameters defining the location, scale and orientation of a key point also define an auxiliary coordinate system in which a descriptor can be expressed that clearly describes the area around the key point location. the descriptor is based on image gradients and their orientations in a region surrounding the key point, where an area of 16 x 16 pixels is divided into 16 blocks of 4 x 4 pixels. for every block a histogram of eight principal orientations is created, weighted by gradient magnitude using a gaussian function. subsequently, by ordering and normalizing these 16 partial histograms of 8 bins each, a descriptor vector of dimension 128 is formed. for key point detection the implementation [1] was used; to further accelerate the computation, an implementation using hardware acceleration on the cuda platform [4] will be tested.
3. finding correspondences
once key points with descriptors are detected in each image, we can proceed to pairing and finding corresponding point couples (correspondences), which arise from the projection of a point in three-dimensional space into both images. the rate of agreement of two key points is determined on the basis of the euclidean distance of their sift descriptor vectors, in two ways. because we do not have any information about the image sequence, it is necessary to compare the images each to each.
3.1. symmetric pairing
for every key point in the first image we find the point with the nearest descriptor distance in the second image. in the second image, the nearest key point in the first image is likewise found for each key point. a pair is labeled as a potential correspondence when there is mutual agreement in both comparisons.
figure 2: key points and their orientations
3.2. distance ratio test
lowe [2] recommends verifying the agreement of the key points by the ratio of the descriptor euclidean distances to the first and second nearest points. as a limit he proposes a distance ratio of 0.6. it is obvious that this criterion can be fulfilled by more than one key point; for reasons of stability and reliability of the correspondences it is convenient not to use these multiple correspondences in further calculations. this test proved not very suitable for markedly repetitive texture (for example images of a panel house facade).
4. fundamental and essential matrix
the set of correspondences obtained by the above procedures is, however, loaded with errors and false correspondences, which arise from changes of camera location, changes of lighting, digital image noise and so on. these false correspondences can be eliminated by using a geometric criterion – the epipolar condition.
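the pairing of sections 3.1 and 3.2 can be sketched in a few lines of python; the 0.6 ratio comes from the text above, while the data layout (n x 128 descriptor arrays) is an assumption of the example, not the authors' c/c++ code.

```python
import numpy as np

# sketch of sections 3.1 and 3.2: a pair is accepted when the nearest neighbour
# is mutual (symmetric pairing) and lowe's distance ratio to the second-nearest
# neighbour is below 0.6. descriptors are n x 128 float arrays.

def match_descriptors(d1: np.ndarray, d2: np.ndarray, ratio: float = 0.6):
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    nn12 = dists.argmin(axis=1)          # best match in image 2 for each point of image 1
    nn21 = dists.argmin(axis=0)          # best match in image 1 for each point of image 2
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:                 # symmetric pairing (3.1)
            continue
        row = np.sort(dists[i])
        if row[0] < ratio * row[1]:      # distance ratio test (3.2)
            matches.append((i, int(j)))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.random((40, 128)).astype(np.float32)
    b = np.vstack([a[:20] + 0.01 * rng.random((20, 128)), rng.random((25, 128))])
    print(len(match_descriptors(a, b.astype(np.float32))))
```

for large descriptor sets the brute-force distance matrix used here would be replaced by an approximate nearest-neighbour search; the acceptance logic stays the same.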
the point x together with the projection centers c and c' forms an epipolar plane in 3d space. the intersections of the epipolar plane with the projection planes form the epipolar lines (epipolars), and the points x and x', which are the projections of the point x into the projection planes, lie on these epipolar lines. these lines also pass through the epipoles e and e', where an epipole is the projection of the projection center of one camera into the projection plane of the other camera.
figure 3: illustration of epipolar geometry
the algebraic formulation of the epipolar condition is equation (2):

$$\begin{bmatrix} x' & y' & 1 \end{bmatrix} f \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = 0 \qquad (2)$$

where f is the fundamental matrix of size 3 x 3 and rank 2. the fundamental matrix defines the relation between two cameras without dependency on the scene structure; for its calculation it is not necessary to know the cameras' interior orientation parameters.
4.1. fundamental matrix computation using correspondences
the matrix f has seven degrees of freedom, so the minimal number of correspondences is seven; the number of solutions can then vary from one to three. for numerical stability it is better to use the eight-point algorithm and find the best solution using svd decomposition. formula (2) defines the relation between two corresponding points. writing f as

$$f = \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix} \qquad (3)$$

we obtain one linear equation for each correspondence:

$$x'x\,f_{11} + x'y\,f_{12} + x'\,f_{13} + y'x\,f_{21} + y'y\,f_{22} + y'\,f_{23} + x\,f_{31} + y\,f_{32} + f_{33} = 0 \qquad (4)$$

for n correspondences we obtain a homogeneous system of linear equations of the following form:

$$a\,\mathbf{f} = \begin{bmatrix} x'_1 x_1 & x'_1 y_1 & x'_1 & y'_1 x_1 & y'_1 y_1 & y'_1 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x'_n x_n & x'_n y_n & x'_n & y'_n x_n & y'_n y_n & y'_n & x_n & y_n & 1 \end{bmatrix} \mathbf{f} = 0 \qquad (5)$$

the least-squares solution is the vector f for which ‖af‖ is minimal and ‖f‖ = 1. if $a = u d v^t$ is the svd decomposition of the matrix a, the solution is the singular vector corresponding to the smallest singular value of a, which is the last column of v. to be a true fundamental matrix, f should have rank 2; due to noise and small inaccuracies, the computed matrix usually has rank 3. for conversion to rank 2 the svd decomposition is used again: the smallest singular value is set to zero and the matrices formed by the decomposition are multiplied back together,

$$f = u d v^{t}, \quad d = \begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{bmatrix}, \quad \bar d = \begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad \bar f = u \bar d v^{t} \qquad (6)$$

for reasons of numerical stability it is good to reduce the input pixel coordinates towards their center of gravity and normalize them. for a successful result it is necessary that the 3d points whose projections are used for the computation of the fundamental matrix do not all lie in one plane. the described eight-point algorithm is linear; a nonlinear solution can be found for example in [5].
4.2. ransac – random sample consensus
for selecting key points meeting the epipolar condition the ransac algorithm was used. this algorithm makes it possible to find iteratively the best solution fitting a given model, in our case x'ᵀ f x = 0. compared with results obtained using the least mean squares (lms) method, which does not take blunders and mistakes into account, results obtained using ransac are more consistent. the model (fundamental matrix) is computed in every iteration from a randomly selected sample of 8 pairs of corresponding key points.
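the linear estimate used inside each ransac iteration, i.e. the normalized eight-point solution of eqs. (4)-(6) with rank-2 enforcement, can be sketched as follows; this is an illustrative numpy version, not the authors' implementation.

```python
import numpy as np

# sketch of the eight-point estimate of section 4.1 (eqs. 4-6) with the
# normalization mentioned in the text. x1, x2 are n x 2 arrays of
# corresponding pixel coordinates in the first and second image.

def normalize(pts: np.ndarray):
    """shift to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    t = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (t @ homog.T).T, t

def eight_point(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    p1, t1 = normalize(x1)
    p2, t2 = normalize(x2)
    # one row of the matrix a (eq. 5) per correspondence
    a = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, vt = np.linalg.svd(a)
    f = vt[-1].reshape(3, 3)                  # singular vector of the smallest singular value
    u, d, vt = np.linalg.svd(f)
    f = u @ np.diag([d[0], d[1], 0.0]) @ vt   # enforce rank 2 (eq. 6)
    f = t2.T @ f @ t1                         # undo the normalization
    return f / np.linalg.norm(f)
```

in the actual pipeline this solve would be called inside the ransac loop of section 4.2, first on random samples of 8 pairs and finally on the best inlier selection.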
the remaining points not contained in the selected sample are then tested against the computed model. if a pre-specified percentage of the key points fits the model parameters and the total model error is at the same time smaller than the error obtained in the previous iteration, the given selection is marked as the best selection obtained so far; otherwise the computed model is rejected. the computation can be terminated after a specified number of iterations or after the error of a statistically significant part of the correspondences falls below a minimal threshold. the last step is to determine the fundamental matrix from the best selection using the svd method (5, 6).
5. essential matrix and relative orientation
provided the fundamental matrix f has been computed and the calibration matrices of both cameras are known, it is possible to formulate the equation for the essential matrix, which defines the relative position of the cameras regardless of scale:

$$e = k'^{t} f k, \qquad k = \begin{bmatrix} f & 0 & p_x \\ 0 & f & p_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (7)$$

if only the relative orientation is solved, it is possible to place the projection center of the first camera at the origin of the coordinate system with its rotation matrix equal to the identity. the projection matrix of the first camera can then be written p = [i | 0] and that of the second camera p' = [r | t], where r is a rotation matrix and t is a translation vector defined up to scale. since e = [t]× r, the rotation and translation can be determined by svd decomposition of the essential matrix e in the following way:

$$w = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \qquad (8)$$

if $e = u\,\mathrm{diag}(1,1,0)\,v^t$ is the svd decomposition of e, then e = s r, where $s = u z u^t$ and $r = u w v^t$ or $r = u w^t v^t$, and the translation t is the last column of u up to sign. the decomposition therefore has four possible solutions. the valid solution is selected by projection depth testing, where the projection depth is computed for some point and checked to be positive for both cameras. the relative orientation is computed for every image pair combination containing a specified minimal number of correspondences; with regard to noise, 16 correspondences appear to be sufficient. in the case of an unordered collection of images where no prior parameters are known, the images are compared each to each and the total number of comparisons is o(n²), where n is the number of images. in our practical experiment we selected 25 images and 300 possible combinations were tested. for larger image sets it is convenient to use parallel computation; in the case of an image sequence it is advisable, because of the computational complexity, to limit the number of compared images to 3 – 5.
6. sparse bundle adjustment
based on the computed relative orientations between single images we can build an approximate scene model in relative coordinates that serves as an estimate of the input parameters entering the complex bundle adjustment. the pair of images with the largest number of correspondences (inliers) is selected as the initializing pair. the objective of the bundle adjustment is the minimization of the reprojection error, where corrections are assigned both to the three-dimensional points and to the parameters of exterior (and possibly interior) orientation of the single cameras:

$$\min \sum_{i=1}^{n} \sum_{j=1}^{m} d\big(p_j(x_i),\, x_{ij}\big)^{2} \qquad (9)$$

where $d(p_j(x_i), x_{ij})^2$ denotes the square of the euclidean distance between the predicted projection $p_j$ of the point $x_i$, given in the three-dimensional global coordinate system, and its real projection $x_{ij}$ in image j.
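before moving on to the adjustment, the relative-orientation recovery of section 5 (eqs. 7-8) can be sketched as below: the essential matrix is formed from f and the calibration matrices, decomposed into the four (r, t) candidates, and the candidate giving positive depth in both cameras is kept. the sketch is illustrative only and is not the authors' code.

```python
import numpy as np

# sketch of section 5: essential matrix from f and the calibration matrices,
# decomposition into four (r, t) candidates and selection by the positive-depth
# (cheirality) test using one triangulated correspondence.

W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])

def decompose_essential(f: np.ndarray, k1: np.ndarray, k2: np.ndarray):
    e = k2.T @ f @ k1                              # eq. (7)
    u, _, vt = np.linalg.svd(e)
    if np.linalg.det(u @ vt) < 0:                  # keep proper rotations
        vt = -vt
    t = u[:, 2]
    return [(u @ W @ vt, t), (u @ W @ vt, -t),
            (u @ W.T @ vt, t), (u @ W.T @ vt, -t)]

def depth_in_front(r, t, k1, k2, x1, x2):
    """triangulate one correspondence (dlt) and test positive depth in both cameras."""
    p1 = k1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    p2 = k2 @ np.hstack([r, t.reshape(3, 1)])
    a = np.vstack([x1[0] * p1[2] - p1[0], x1[1] * p1[2] - p1[1],
                   x2[0] * p2[2] - p2[0], x2[1] * p2[2] - p2[1]])
    X = np.linalg.svd(a)[2][-1]
    X = X / X[3]
    z1 = X[2]                                      # depth in the first camera
    z2 = (r @ X[:3] + t)[2]                        # depth in the second camera
    return z1 > 0 and z2 > 0

def relative_orientation(f, k1, k2, x1, x2):
    """return the (r, t) candidate that passes the depth test, or None."""
    for r, t in decompose_essential(f, k1, k2):
        if depth_in_front(r, t, k1, k2, x1, x2):
            return r, t
    return None
```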
the basis of the sba (sparse bundle adjustment) algorithm is an implementation of the nonlinear levenberg-marquardt algorithm, in which the local minimum of the reprojection of points from global coordinates into image coordinates is sought using a combination of the gauss-newton method and the method of steepest descent. if the input parameter estimate is too far from the local minimum of the cost function, the algorithm behaves like the steepest descent method, which guarantees at least slow convergence; when moving close to the local minimum, the gauss-newton method is used, which guarantees fast convergence. the main advantage of the sba software package [7] is the use of optimized memory structures for storing the sparse matrices arising from the normal equations. without the use of sparse matrices the solution would be almost impossible for hundreds of thousands of unknowns because of the extreme memory and computation cost.
7. experimental results
the input selection contained 25 images of the žinkovy castle facade (fig. 5) shot with an olympus digital camera; the computations were performed on a standard pc with an intel core 2 duo processor. the algorithms were implemented in the c/c++ language. based on the results of the lens calibration, distortions were neglected. the original images of 3648 x 2736 pixels were reduced to 1204 x 903 pixels because of the memory cost of the key point computation.
figure 4: input dataset: 25 images, size 1204 x 903 pixels
computation of the sift descriptors took 424 seconds, 17 seconds per image on average, and 485536 key points were detected. key point pairing for the 300 image combinations took 900 seconds. finally, 43052 three-dimensional points entered the bundle adjustment. the results of the bundle adjustment were the absolute orientations of the images in a relative coordinate system, defined by the first pair of images. the coordinates of the projection centers were also determined independently in the intergraph isat software using ground control points in a geodetic system. to compare the accuracy of the automatic orientation, the transformation parameters and the coordinate differences between the projection centers in both systems were also computed.
figure 5: reconstructed projection centers

image       dx     dy     dz
p6027864    0.28   0.13  -0.09
p6027865    0.19  -0.01   0.03
p6027866    0.17  -0.01   0.08
p6027867   -0.24  -0.05   0.09
p6027869    0.59   0.09  -0.09
p6027870    0.56  -0.07   0.02
p6027871    1.23   0.00   0.44
p6027872    0.92   0.03  -0.01
p6027873   -0.62  -0.09  -0.26
p6027874   -0.57  -0.25   0.01
p6027875   -0.55   0.08   0.14
p6027876   -0.95   0.01  -0.15
p6027877   -0.41   0.02  -0.15
p6027878   -0.49   0.25   0.22
p6027879   -0.36   0.18   0.48
p6027880   -0.35  -0.30  -1.02
p6027881   -0.14  -0.03  -0.03
p6027882   -0.25  -0.08   0.17
p6027883   -0.41   0.04   0.30
p6027884   -0.45  -0.02  -0.53
p6027885    0.19   0.05   0.04
p6027886    0.24   0.05   0.30
p6027887    0.42   0.11   0.55
p6027888    0.41   0.01  -0.51
p6027889    0.57  -0.12  -0.04

table 1: differences between the coordinates of the projection centers computed in intergraph isat and the coordinates obtained by the automatic process
8. summary
by the above mentioned methods the absolute orientations of the projection centers in a relative coordinate system were determined fully automatically using image descriptors.
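the comparison behind table 1 presupposes that the relative model and the geodetic coordinates are brought into a common frame; one hedged way to do this is a seven-parameter similarity transform (scale, rotation, translation) estimated from matched projection centers with the closed-form umeyama solution, sketched below. this is an illustration only, not the intergraph isat procedure used by the authors.

```python
import numpy as np

# hedged sketch: estimate scale s, rotation r and translation t so that
# geodetic ~= s * r @ relative + t from matched projection centers
# (closed-form umeyama solution). all sample numbers are synthetic.

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """src, dst: n x 3 arrays of corresponding points (relative, geodetic)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / len(src)
    u, d, vt = np.linalg.svd(cov)
    sgn = np.sign(np.linalg.det(u @ vt))
    r = u @ np.diag([1.0, 1.0, sgn]) @ vt
    s = (d[0] + d[1] + sgn * d[2]) / (cs ** 2).sum(axis=1).mean()
    t = mu_d - s * r @ mu_s
    return s, r, t

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rel = rng.random((25, 3)) * 10
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]                       # make q a proper rotation
    geo = 2.5 * rel @ q.T + np.array([100.0, 200.0, 50.0])
    s, r, t = similarity_transform(rel, geo)
    residuals = geo - (s * rel @ r.T + t)
    print(round(s, 3), np.abs(residuals).max() < 1e-9)
```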
the parameters obtained in this way can easily be transformed into a geodetic system using ground control points and, ideally, can directly serve as input to other tasks such as the generation of an orthogonalized mosaic of facade images. in a less ideal case we obtain input estimates of the parameters for further advanced calculations and decrease the time and labour of the absolute orientation procedure. inaccuracies can be attributed both to the numerical instability of the algorithms and to insufficient calibration of the lens. other options for achieving better results are: 1. use of the gpu to reduce the time complexity; 2. selection of key points only in particular parts of the images (gruber areas).
references
[1] lowe, d.: sift demo implementation. http://www.cs.ubc.ca/~lowe/keypoints/
[2] lowe, d.: distinctive image features from scale-invariant keypoints. international journal of computer vision, 60(2), 2004, p. 91-110.
[3] harris, c. g.; stephens, m. j.: a combined corner and edge detector. proceedings of the fourth alvey vision conference, 1988, p. 147-151.
[4] wu, c.: sift on gpu. university of north carolina at chapel hill, http://www.cs.unc.edu/~ccwu/siftgpu/
[5] hartley, r. i.; zisserman, a.: multiple view geometry in computer vision. cambridge university press, 2000.
[6] hartley, r. i.: an investigation of the essential matrix. 1993. http://users.rsise.anu.edu.au/~hartley/papers/q/q.pdf
[7] lourakis, m. i. a.; argyros, a. a.: sba: a software package for generic sparse bundle adjustment. acm transactions on mathematical software (toms), 36(1), march 2009, p. 1-30.
[8] brown, m.; szeliski, r.; winder, s.: multi-image matching using multi-scale oriented patches. proc. int. conf. on computer vision and pattern recognition, san diego, 2005, p. 510-517.
[9] triggs, b.; mclauchlan, p.; hartley, r.; fitzgibbon, a.: bundle adjustment - a modern synthesis. vision algorithms: theory and practice, 1999, p. 298-372.
[10] lourakis, m. i. a.: a brief description of the levenberg-marquardt algorithm implemented by levmar. http://www.ics.forth.gr/lourakis/levmar/levmar.pdf, july 2004.
[11] hartley, r. i.: in defence of the 8-point algorithm. fifth international conference on computer vision (iccv'95), 1995, p. 1064.
3dmadmac|automated: synergistic hardware and software solution for automated 3d digitization of cultural heritage objects
robert sitnik1, maciej karaszewski1, wojciech załuski1, eryk bunsch2
1 warsaw university of technology, faculty of mechatronics, sw. andrzeja boboli 8, 02-525 warsaw, poland, r.sitnik@mchtr.pw.edu.pl
2 the wilanow palace museum, stanislawa kostki potockiego 10/16, 02-958 warsaw, poland, ebunsch@muzeum-wilanow.pl
keywords: 3d shape measurement, structured light, next best view, cultural heritage 3d digitization, automated data processing
abstract: in this article a fully automated 3d shape measurement system and data processing algorithms are presented.
main purpose of this system is to automatically (without any user intervention) and rapidly (at least ten times faster than manual measurement) digitize whole object’s surface with some limitations to its properties: maximum measurement volume is described as a cylinder with 2,8m height and 0,6m radius, maximum object's weight is 2 tons. measurement head is automatically calibrated by the system for chosen working volume (from 120mm x 80mm x 60mm and ends up to 1,2m x 0,8m x 0,6m). positioning of measurement head in relation to measured object is realized by computer-controlled manipulator. the system is equipped with two independent collision detection modules to prevent damaging measured object with moving sensor’s head. measurement process is divided into three steps. first step is used for locating any part of object’s surface in assumed measurement volume. second step is related to calculation of "next best view" position of measurement head on the base of existing 3d scans. finally small holes in measured 3d surface are detected and measured. all 3d data processing (filtering, icp based fitting and final views integration) is performed automatically. final 3d model is created on the base of user specified parameters like accuracy of surface representation and/or density of surface sampling. in the last section of the paper, exemplary measurement result of two objects: biscuit (from the collection of museum palace at wilanów) and roman votive altar (lower moesia, ii-iii ad) are presented. 1. introduction in recent years, many cultural institutions (museums, galleries etc) reached the conclusion that their presence in digital community is insufficient. the trend of making parts of collections available via internet is rapidly growing, aided by excellent and still more and more affordable digitization hardware. existing realizations are various, ranging from simple photos and two-dimensional scans, through three-dimensionally mapped photos (i.e. google‟s art project [1] with contributors like national gallery london, palace of versailles or van gogh museum) to full three -dimensional presentations (versailles palace [2], the khufu pyramid [3], j. iwaszkiewicz‟s stawisko [4], fryderyk chopin‟s piano [5]) with fully digitized objects presented as 3d models viewable from any direction. multimedia presentations are not sole purpose of digitization – the resolution and accuracy of used measurement devices become so high it allows for creating relics‟ documentation and their eternal copies [6, 7]. in spite of all gains of 3d digitization of cultural heritage objects, it is still not very commonly used, what is a result of many problems associated with digitization process. 1.1 difficulties in 3d digitization development of hardware devices used in digitization (laser or structured light scanners) is generally much faster than progress of measurement strategies, at least partial automation and data processing software. this discrepancy often results in poor usability of even excellent scanning devices, leaving end-users without means to achieve professional final documentation. typical digitization process contains of planning stage, when measurement strategies are created, scanning process, during which multiple directional measurements (each digitizing small part of an object) are taken, views integration (when directional measurements are integrated in full 3d model) and final processing (filtering, simplifying, triangle mesh creation etc.). 
by now, most of those stages are performed manually – measurement strategy and placement of scanner in subsequent measurements are done by skilled operator (obviously, those cannot be repeated ________________________________________________________________________________ geoinformatics ctu fce 2011 324 accurately during another measurements), views integration also very often requires human intervention. by lack of automation, each digitization process is unique (non-repeatable and subjective) which disqualifies this method as a mean for creating professional documentation [7]. moreover, time and cost of manually assisted digitization is very high. automation of measurement process, connected with fully autonomous processing of obtained data is clearly an answer to above stated problems. this thesis was the idea behind development of 3dmadmac|automated, the three-dimensional digitization and data processing system for cultural heritage objects presented in this paper. 2. existing digitization solutions to best of our knowledge, no system realizing all above stated assumptions for wide range of objects exist. much work has been done on algorithms for views integration [8 – 12], however most of them require either initial fitting (by operator) or presence of unique objects (on object or in its vicinity), which often renders them unusable. as for automation of measurement process, rather simplified and not time-optimal solutions have been developed (i.e. scanning whole measurement volume by parts regardless if there is some part of an object or not) or ones based of a priori knowledge of object‟s shape (mainly for technical parts with cad documentation [13]). also, much work has been dedicated the problem of navigating measurement device in unknown space (with collision-free path calculated on-line during movement) – so called slam [14] problem [15]; however those solutions cannot be directly implemented in 3d shape digitization system because of different relations between size of measurement object and measurement device (in slam systems scanning head can be treated as point-like object travelling through vast 2.5 dimensional space, not able to change its attitude, in 3d digitizing systems scanning head is often of comparable dimensions with scanning object, cannot travel on the object and has to be positioned with different devices). 3. system concept system realizing ideas presented in introduction should allow for completely automated digitization of cultural heritage objects of given dimensions and weight with some constraints regarding surface parameters and shape complexity. the measurement process should not require any user intervention beside placement of object within measurement volume and entering few controlling parameters (like required resolution, final model format etc.). to obtain full digital representation of object‟s surface scanning head has to be positioned freely within measurement volume around digitized object; moreover, the transitions between subsequent head‟s positions have to be collision – free. positions of head in next measurement have to be calculated automatically on the basis of hitherto obtained object‟s model. positioning system should also report achieved head‟s position which could be used as a mean of initial fitting of directional measurement into whole model (thus eliminating the need of user-interaction during views integration or unique elements presence). 
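as a minimal sketch of the initial fitting mentioned above – assuming, for illustration only, that the positioning system reports the achieved head pose as a 4 x 4 homogeneous matrix – a directional scan can be brought into the global coordinate system like this before fine fitting:

```python
import numpy as np

def initial_fit(cloud_xyz, head_pose):
    """transform a directional scan (n x 3, scanner coordinates) into the global
    coordinate system using the pose reported by the positioning system
    (4 x 4 homogeneous matrix); icp then only has to refine this guess."""
    pts = np.hstack([cloud_xyz, np.ones((cloud_xyz.shape[0], 1))])
    return (head_pose @ pts.T).T[:, :3]
```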
furthermore, processing software should be able to perform fine data fitting by means of icp [18] or similar algorithm for vast datasets (with currently achievable resolutions, dataset size can easily be up to hundreds of gigabytes). implementation of this concept, built in cooperation between warsaw university of technology and the wilanow palace museum, uses commercial six-degree-of-freedom robot fixed to extendable column for scanning head‟s positioning, rotating table for object placement, structural light scanner returning measurement results as clouds of points (set of xyz coordinates along with surface normal and color in sampled point). measurement volume of scanning head can be set from 120mm x 80mm x 60mm up to 1200mm x 800mm x 600mm with accuracy better than 1/10 000 in respect to volume‟s largest dimension. maximum dimensions of digitized object can be as large as cylinder of 600 mm radius and 2800mm height while its weigh cannot exceed 2000 kg. photo of real measurement system‟s setup is at figure 1. developed software is composed of modules realizing: 1. measurement strategy (next scanner position) calculation 2. collision – free path planning with inverse kinematics 3. positioning devices‟ control along with real-time collision-detection backup system 4. directional measurements ordering 5. initial data fitting (basing on achieved scanning head‟s position) 6. fine data fitting (by icp algorithm) 7. final integration 8. simplification algorithms, textured triangle mesh generation the application contains 64bit memory manager allowing for virtually unlimited processed data size (a custom implementation of page-file technique, using hard disks as an “extension” of operational computer memory). it is heavily optimized allowing multithreaded processing which utilizes modern cpu‟s capabilities. ________________________________________________________________________________ geoinformatics ctu fce 2011 325 figure 1. measurement system setup. 4. measurement process with examples typical digitization process, beginning with preparation stage and ending with 3d model in format of triangle mesh with texture is presented in following subchapters. all elements of this process are illustrated by pictures from real digitization of cultural heritage objects, namely ii-iii ad roman votive altar from lower moesia and 18th century ceramic figurine of juno, made as imprint from the form created by johann friedrich eberlein in year 1741. this statuette is a part of the collection of museum palace at wilanów. 4.1 preparation stage measurement head, (in our implementation 3dmadmac structural light scanner [19]) calibrated to required resolution and accuracy is fixed to robot‟s wrist, in a way ensuring stiff connection between those two devices. after this stage, the calibration of relations between coordinate system of robot and scanner has to be performed (clouds of points, obtained from the scanner have to be transformed into global coordinate system fixed to positioning devices). this process consists of taking few directional measurements (during which position of scanning head is known only in robot coordinates) of object of unique shape (complicated sculpture or artificial calibration body – figure 2) to allow for automatic initial fitting of clouds. after views integration, the relation between coordinate systems of scanner and robot can be easily calculated. in similar way relations between rotating table‟s axis and robot fixture or axis of column‟s movement can be found. 
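one building block of such a calibration between the scanner and robot coordinate systems is the least-squares estimation of a rigid transformation from corresponding points expressed in both frames. the following sketch uses the well-known svd (kabsch) solution; it illustrates the principle only and is not the procedure actually implemented in the system.

```python
import numpy as np

def rigid_transform(src, dst):
    """least-squares rotation R and translation t with dst ≈ R @ src + t,
    for corresponding points (n x 3) given in two coordinate systems."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```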
________________________________________________________________________________ geoinformatics ctu fce 2011 326 figure 2. calibration unit used for identifying relations between coordinate systems. after calibration of coordinate systems, the object to be digitized is placed on the rotatable stage. it should be put in the middle of measurement volume. after fixing object to table, operator has to approximately measure dimensions of virtual cylinder enclosing digitized object ( figure 3). those parameters along with required resolution of final model and its format (cloud of points, triangle mesh) are put into controlling applications interface. afterwards, the proper measurement process can be started. a) b) c) d) figure 3. virtual cylinder encompassing measured object: a) correct; b) radius too large; c) radius too small; d) object not centered. 4.2 digitization measurement process can be divided into three stages, namely locating an object in measurement volume (1), rough scanning (2) and filling of discontinuities (and areas with resolution lower than required)(3). locating an object within measurement volume is tightly connected with proper values of parameters entered by operator at the end of preparation stage. to allow for collision-detection calculations, cylinder encompassing measured object has to be initially treated as unknown (i.e. it cannot be stated that moving scanner through it can be done without collision), and only already measured parts can be marked as empty or full, regarding if any part of an object is present there. therefore, the first part of measurement process consists of locating any part of object‟s surface within the cylinder. this is done by “sculpting” the cylinder‟s volume with subsequent measurements ( figure 4). after obtaining cloud of points representing a part of digitized object, the next stage (rough scanning) is started. during rough scanning, measurement planning algorithms calculate few positions of scanning head which give best possibility of obtaining large part of object‟s surface sample ( figure 5). the algorithm used here locates areas near edges of already obtained model, and using surface normal vectors predicts probable shape of neighbourhood. afterwards, it calculates position and orientation of scanner required to measure this predicted part of object. those candidate positions are sent to module which analyzes if they are ________________________________________________________________________________ geoinformatics ctu fce 2011 327 possible to reach without collisions and if so, the scanner is transferred there. if it is impossible to place scanner in required point because of its collision with part of measurement space not yet known, the algorithm may order taking a scan there to complete space information. when the measurement head is positioned in required position, the scan is ordered. obtained cloud of points, after some postprocessing (filtering, noise cancellation etc.), is transformed through coordinates of scanner in global coordinate system thus initially fitted into existing model. afterwards, fine fitting is performed by automatic icp algorithm (figure 6). figure 4. locating measured object's surface. a) b) figure 5. next measurement directions proposed by rough scanning algorithm for the first measurement obtained from scanning of votive altar: a) all; b) selected. 
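the rough-scanning planner described above can be caricatured by a very small sketch: take a border region of the partial model, average its surface normals, and place the scanner at a standoff distance along that normal, looking back at the region. the standoff value and the function interface are illustrative assumptions; the real module additionally predicts the neighbouring surface and checks reachability and collisions.

```python
import numpy as np

def candidate_view(border_pts, border_normals, standoff=0.5):
    """propose one scanner position/orientation for a border region of the
    partial model; collision checking is left to a separate module."""
    centre = border_pts.mean(axis=0)
    n = border_normals.mean(axis=0)
    n /= np.linalg.norm(n)
    position = centre + standoff * n     # scanner placed outside the surface
    view_dir = -n                        # optical axis pointing at the region
    return position, view_dir
```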
the process continues until view planning algorithm cannot distinguish any large border areas of digitized model (greater than 10% of measurement‟s head volume). afterwards, the final stage is started. during third stage of digitization, areas of low point‟s resolution (lower than required) are identified. they are nested into groups of size similar to measurement head‟s working volume, and for each of those groups the position for scanning head required to measure it is calculated. after the process is completed, full digital model of an object is contained in computer memory. of course, if analyzed object contains areas, which cannot be measured by used scanning head (for example occluded by another parts of object), cannot be digitized. 4.3 final processing after measurement process is completed, the global relaxation algorithm is run, along with automated re -colouring of overexposed areas. in the next stage, data simplification is performed, according to parameter given at start by operator. if it is required to convert cloud of points into triangle mesh, the appropriate algorithm is run at the end of processing. exemplary results are shown at figure 7 and figure 8. ________________________________________________________________________________ geoinformatics ctu fce 2011 328 a) b) c) d) figure 6. automatic processing of directional cloud of points from measurement: a) source cloud (up), noise filtering (down); b) cloud without transformations; c) inital fitting to global model; d) after icp. a) b) c_ d) figure 7. votive altar: a) photo; b) cloud of points; c) triangle mesh; d) triangle mesh with texture. a) b) c) d) figure 8. biscuit: a) photo [20]; b) cloud of points; c) triangle mesh; d) triangle mesh with texture. 6. summary the 3d shape digitization system, 3dmadmac|automated, presented in this paper is the implementation of concept of a tool aimed at popularization of 3d scanning techniques in cultural heritage documentation and presentation. its features address all problems present in commercially available digitizing devices, especially the need of skilled operator‟s supervision and assistance. it was developed as scalable and easily configurable universal measurement system capable of automatic digitization of various classes of objects, especially cultural heritage ones. till now, it has been used to digitize more than 40 works of arts from collections of museum palace at wilanów, academy of fine arts at warsaw and national museum at warsaw. ________________________________________________________________________________ geoinformatics ctu fce 2011 329 parameters of exemplary objects, along with processing time and dataset sizes is presented in table 1. object dimensions material directional scans (rough scanning) directional scans (filling of discontinuities) digitization time point count triangle count votive altar from lower moesia 286mm x 168mm x 168mm sandstone 32 8 19h 03m 20s 141231113 (resolution 0.05mm) 873510 (resolution 0.5 mm) biscuit 335mm x 205mm x 260mm biscuit 142 25 92h 05m 41s 720842124 (resolution 0.02mm) 1213158 (resolution 0.5 mm) table 1. parameters of measured objects 7. references [1] google art project, http://www.googleartproject.com/. [2] grand versailles numérique, http://www.gvn.chateauversailles.fr. [3] the khufu pyramid, http://www.3dvia.com/3d_experiences/view_experience.php?experienceid=1. [4] j. iwaszkiewicz‟s stawisko, http://stawisko.pl/wirtualne/stawisko/index.html. [5] f. chopin‟s piano, http://www.culture.pl/chopin/index.html. [6] e. 
bunsch, r. sitnik, j. michoński, art documentation quality in function of 3d scanning resolution and precision, proc. spie 7869, 2011, 78690d. [7] e. bunsch, r. sitnik, documentation instead of visualization applications of 3d scanning in works of art ana lysis, proc. spie 7531, 2010, 75310i. [8] dorai, c., wang, g., jain, a.k., mercer, c.: registration and integration of multiple object views for 3d model construction, ieee transactions on pattern analysis and machine intelligence, 20 (1998) 1. [9] zhou, h., liu y.: accurate integration of multi-view range images using k-means clustering, pattern recognition, 41 (2008) 1, 152-175. [10] kapoutsis, c.a., vavoulidis, c.p., pitas, i.: morphological iterative closest point algorithm, ieee transactions on image processing, 8 (1999) 11, 1644 – 1646. [11] zach, c., pock, t., bischof, h.: a globally optimal algorithm for robust tv-l1 range image integration, ieee 11th international conference on computer vision, october 2007, 1 8. [12] sappa, a. d., garcía, m.a.: incremental multiview integration of range image, 15th iapr international conference on pattern recognition, barcelona, september 2000, 546-549. [13] ainsworth, i., ristic, m., brujic d.: cad-based measurement path planning for free-form shapes using contact probes, international journal of advanced manufacturing technology, (2000) 16, 23–31. [14] thrun, s.: robotic mapping: a survey, exploring artificial intelligence in the new millenium san mateo, ca, morgan kaufmann, 2002. [15] gamini dissanayake, m. w. m., newman, p., clark, s., durrant-whyte, h. f., csorba, m.: a solution to the simultaneous localization and map building (slam) problem, transactions on robotics and automation, 17 (2001) 3. [16] menegatti, e., pretto, a., scarpa, a., pagello, e.: omnidirectional vision scan matching for robot localization in dynamic environments, ieee transactions on robotics, 22 (2006) 3. [17] chang, h., j., lee, , c. s. g., lu, y-h., hu, y.c.: p-slam: simultaneous localization and mapping with environmental-structure prediction, ieee transactions on robotics, 23 (2007) 2. [18] besl, p., mckay, n.: a method for registration of 3-d shapes, ieee transactions on pattern analysis and machine intelligence, 14(1992)2, 239 – 256. [19] sitnik ,r., kujawińska, m., załuski, w.: 3dmadmac system: optical 3d shape acquisition and processing path for vr applications, proc. spie 5857, 2005, 106-117. [20] szelegejd, b.: wyrafinowany urok białej porcelany. wilanowska kolekcja biskwitów, warsaw 2006, 71-72. http://www.googleartproject.com/ http://www.gvn.chateauversailles.fr/ http://www.3dvia.com/3d_experiences/view_experience.php?experienceid=1 http://stawisko.pl/wirtualne/stawisko/index.html rating of authors rating of authors editorial dear readers, there is a vast theoretical background for evaluation of scientific works and there are a lot of ways how to, if possible, objectively evaluate the significance and quality of individual theses, authors and researches. various evaluation elements are more or less objective in different branches of research and it is necessary to consider suitability of their use and "justness" of the final comparison. then it depends on each metric, how it uses these and other parameters and how many iterations it logs. elementary ways of calculation of these indicators of quality, their properties and scientific power evaluation of an researcher are briefly explained in this article. 
citations

one of the basic elements for evaluating the quality and importance of an article is the number of citations of the given publication. in this case, with some exaggeration, a supply-and-demand system should work: citations show how many times the publication has been cited by other authors. a publication with many citations is probably important for further study in its field of research. a scientist is obliged to cite all sources – the works of others that he or she used – and should minimize the number of non-constructive citations of authoritative or prestigious works. a common phenomenon is the "boss citation", made for the purpose of crediting authors with a high academic degree, or the citation of works written by editors and potential reviewers in order to increase the chances of the article being published. a special case is the self-citation (autocitation), i.e. an overlap between the authors of the cited and the citing work. on average, such citations represent 20–30% of the whole amount; databases like web of science are able to count citations with as well as without self-citations. there is also journal self-citation, when articles of the same journal are cited regardless of who their author is. less easily recognisable are cases when several groups of scientists cite each other to help each other, citations among co-workers of one institution, or so-called second-generation cyclic citations – citing an article which itself cites the writer's article.

cited half-life

the cited half-life is the median age of the articles cited in the journal citation reports (jcr) year. for example, if a journal has a cited half-life of 7 years, half of all cited articles were published more than 7 years ago and half more recently. this number can be used as an index of the balanced quality of a journal.

altmetrics

altmetrics is an acronym for alternative metrics, which try to capture the whole influence of science on society. compared to citation-based indexes, these metrics include not only articles but also conference presentations, posters, clinical studies, grants, or social networks. the advantage of altmetrics is the wider range of sources; the disadvantages are commercialisation and the quality of the data.

eigenfactor score

the eigenfactor project is an academic research project sponsored by the university of washington, which aims at using large-scale network analysis to map the structure of academic research. compared to older methods, this project works with all stages of connections between articles, includes long-period calculations of an article's influence, and also takes into consideration the typical publication activity in the relevant branches of science.

number of articles

the number of articles expresses how active the author is in publication work. it is important to express the number of citations for the past few years, usually five. but to obtain results comparable across all researchers it is necessary to create indexes; some of the most important ones are described below.

hirsch's index

the hirsch index, or h-index, expresses how many of the author's articles reach a higher number of citations than their serial number in the list of articles sorted by the number of citations. it is one of the citation indexes which describe the response to the articles published by an individual researcher. if f(i) is the number of citations of the i-th most cited article, then

h-index(f) = max_i min(f(i), i).

the index can be found, for example, on the web of science in the citation report section.
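a minimal sketch of this definition (the function name and the example data are illustrative only):

```python
def h_index(citations):
    """h-index: the largest i such that the i-th most cited paper has at
    least i citations (papers sorted by citation count, descending)."""
    f = sorted(citations, reverse=True)
    return max((min(c, i) for i, c in enumerate(f, start=1)), default=0)

# example: citation counts [10, 8, 5, 4, 3, 0] give an h-index of 4
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```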
the h-index attempts to measure the productivity and impact of a researcher in a single number. there are a number of situations in which it can be misleading. the main problem is that the index does not account for the typical numbers of citations and articles in different fields; as mentioned above, some branches are much more active in producing large numbers of less significant results. it also does not take into account the author's position in the list of authors, although in some disciplines that position is important.

figure 1: principle of hirsch's index

m-index

the m-index is derived from hirsch's h-index (h) and depends on the number of years (n) between the researcher's first and most recent publication:

m = h / n.

the m-index, also proposed by hirsch, is defined as the h-index divided by the number of years between the researcher's first and most recent publication. this allows a comparison of early- and late-stage scientists by introducing a time-correction constant into hirsch's index. the m-index averages periods of high and low productivity throughout a career, which may or may not reflect the current situation of the scientist, who can be temporarily inactive in publication work, for example because of working on new methods, or not yet having the time or results to write about.

in-index

the in-index represents the number of publications with at least n citations; it is usually used with n = 10 or n = 100.

g-index

let us order all of the author's articles by the number of citations they have received, beginning with the highest. the g-index is the largest number g such that the g most cited articles together received at least g^2 citations. for example, if an author's g-index is 10, his or her 10 most cited articles must have at least 100 citations in total.

tori-index

the total research impact of a scholar (tori) is calculated from the reference lists of the citing papers, without self-citations. the contribution of each citing article is normalized by the number of remaining references in the citing article (r) and by the number of authors of the cited work (a); n is the number of non-self citations of the article. the tori-index is defined as the amount of work that others have devoted to one's research, measured in research papers:

tori = Σ (over the n citing papers) 1 / (a · r).

riq-index

this abbreviation means research impact quotient. the relationship between the riq-index and the tori-index is analogous to the bond between the m-index and the h-index: the h-index is replaced in the formula by the square root of tori, n is the number of years between the first and the most recent publication, and the result is multiplied by 1000:

riq = 1000 · sqrt(tori) / n.

conclusion

around the scientific world, different ways of assessing the quality of research and its impact have been created; grant policies and the migration of scientists are based on their results. however, even the best methods still cannot compare completely different branches, countries and departments in an absolutely objective and complex way. in general, it is important for a scientist to publish in well reputed journals and to be cited by other scientists eminent in their branch of research.
veronika stehlíková martin urban ondřej netvich ctu in prague—faculty of electrical engineering geoinformatics fce ctu 15(2), 2016 4 http://orcid.org/0000-0002-3444-9661 http://orcid.org/0000-0003-3121-221x http://orcid.org/0000-0003-1054-1700 editorial: rating of authors testing metadata existence of web map services jan růžička institute of geoinformatics vsb – tu of ostrava jan.ruzicka vsb.cz keywords: web map service, metadata, intelligent map systems abstract for a general user is quite common to use data sources available on www. almost all gis software allow to use data sources available via web map service (iso/ogc standard) interface. the opportunity to use different sources and combine them brings a lot of problems that were discussed many times on conferences or journal papers. one of the problem is based on non existence of metadata for published sources. the question was: were the discussions effective? the article is partly based on comparison of situation for metadata between years 2007 and 2010. second part of the article is focused only on 2010 year situation. the paper is created in a context of research of intelligent map systems, that can be used for an automatic or a semi-automatic map creation or a map evaluation. intelligent map systems intelligent systems, that can help to user with map composition should be based on knowledge base. rules defined in the knowledge base can help with map creation to make it better and correct in a way of defined conditions. the project ” the intelligent system for interactive support of thematic map design” should be focused on rules definition and pilot system creation for using this rules in a process of a map creation. what is necessary to make such system operable are sufficient data inputs. the inputs could be provided by user, when it is necessary. when it is possible the inputs should be available without user’s activity. in many cases are metadata important input for map creation. this article is focused on metadata available for web map services, that can be used for automatic or semi-automatic map creation based on expert system evaluation. geinformatics fce ctu 2010 49 růžička j.: testing metadata existence of web map services web map service and metadata metadata can help with correct data usage. web map service described for example in [1] is a standardised definition of service interface that defines how to requests service to obtain digital map. the interface definition is quite simple and widely supported. that makes wms good servant. what is very important is that a user can simply combine several wms sources with general software. that makes wms very bad master. a typical incorrect usage of wms is described at the figure 1. the user combines two wms sources. the first source publishes undermined areas. the second source publishes cadastral map. both sources are created with different methods that leads into very different positional accuracy. while the cadastral map has maximal error in horizontal position less than 2 meters, the undermined areas can have maximal error in horizontal position more than 50 meters. the possible positional error is described at the figure. a user that does not know this metadata of used sources can make possible wrong decision (for example decide that parcel owned by franta novák is undermined). figure 1: a possible wrong usage of wms sources the idea of automatic or semi-automatic map creation is quite old and for example described in [2]. 
specification wms contains metadata elements that can help with automatic or semiautomatic map creation. basic metadata elements are available in capabilities document, that is obtained according to getcapabilities request on a service. the metadata are either fully contained in the document (e.q. title, contact, abstract) or referenced to the external document (layer metadata), so from that point of view is everything prepared for automatic or semi-automatic map creation. our question was: how look the metadata in reality? geinformatics fce ctu 2010 50 růžička j.: testing metadata existence of web map services tests of czech wms 2007 year roman kazsper did in 2007 year tests of wms for czech republic that were publicly available. the results are in detail described in [3]. here follows only a summary of the results. at that time were tested about 30 services from the following providers: � regional authority of the south moravia region � regional authority of the pardubice region � regional authority of the hradec králové region � regional authority of the liberec region � czech environmental information agency � forest management institute � t.g. masaryk water research institute some of the services missed keywords and contact information, all services except one provider missed metadata for used geodata. roman kaszper contacted all providers with request to correct their metadata. some of them promised to correct the metadata, most of them did not even answer. 2010 year the services tested in 2007 year were tested in 2010 year to compare results. one provider improve metadata with contact information. one provider moved metadata for layers to a different location than is referenced from capabilities document. so we must said that the situation is probably worse than 5 years ago. test of world wms in 2010 year if the situation in the czech republic is so bad, how looks the situation in the whole world. for the test were identified 559 wms services mainly from usa and europe. situation is described at the following figures. conclusion situation in the czech republic is similar to the global world situation, metadata for layers are not common even in germany. contacting providers and presentation of the results on conferences was not very successful, but we should keep to inform providers about their mistakes. for this purpose, according to tests in 2010 year was prepared a web service available via a web browser for testing wms services. this service is described below. finally we have geinformatics fce ctu 2010 51 růžička j.: testing metadata existence of web map services figure 2: completeness of metadata elements in all tested wms (world) figure 3: completeness of metadata elements in wms from the czech republic to find ways how to work without metadata correctly. one possible option is briefly described below. geinformatics fce ctu 2010 52 růžička j.: testing metadata existence of web map services figure 4: completeness of metadata elements in wms from canada figure 5: completeness of metadata elements in wms from the usa ogc test service this service can help with testing web services based on ogc specifications. at the moment is geinformatics fce ctu 2010 53 růžička j.: testing metadata existence of web map services figure 6: completeness of metadata elements in wms from germany figure 7: completeness of metadata elements in wms from italy functional only first service that validates wms capabilities document according to metadata content. 
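the kind of check performed by such a validator can be sketched as follows. the endpoint url is hypothetical; the request parameters are the standard wms getcapabilities key-value pairs, and the element names correspond to metadata elements of the capabilities document (title, abstract, keyword list, contact information, references to layer metadata).

```python
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def check_wms_metadata(base_url):
    """fetch a wms capabilities document and report which basic metadata
    elements are present anywhere in it (namespace-agnostic check)."""
    query = urlencode({"service": "WMS", "request": "GetCapabilities", "version": "1.3.0"})
    with urlopen(f"{base_url}?{query}") as resp:
        root = ET.parse(resp).getroot()

    def found(local_name):
        return any(el.tag.rsplit("}", 1)[-1] == local_name for el in root.iter())

    return {name: found(name) for name in
            ("Title", "Abstract", "KeywordList", "ContactInformation", "MetadataURL")}

# example (hypothetical endpoint):
# print(check_wms_metadata("http://example.org/wms"))
```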
the service is based on similar principle as w3c html or css validator service. a user can specify url of the service in a simple form (fig. 8). the service is requested geinformatics fce ctu 2010 54 růžička j.: testing metadata existence of web map services with getcapabilities request and the response is validated according to existence of metadata elements. if the capabilities response does not contain sufficient metadata the user is informed by error message (fig. 9). if the capabilities response contains default values in metadata items (such as wms, service or arcims) the user is informed by warning message. the service is available at http://gis.vsb.cz/ogctest/. figure 8: wms metadata validator service figure 9: results of validation how to use geodata without metadata in a correct way intelligent map systems are living systems with knowledge base that can grow with every usage. in a case when there is a back correction from an expert then the system can learn which situation leaded to the wrong geodata usage. shared information about geodata usage can help with fill in at least one metadata item, that can be generally named history of geodata usage. in many cases this metadata item is more important than the other. so we can probably in a long term usage use geodata without metadata in a correct way, but there must be intelligent map systems available for public usage and experts for cartography to evaluate created maps. support the article is supported by czech science foundation as a part of the project ga205/09/1159 – the intelligent system for interactive support of thematic map design references 1. wms ogc – web map service – ogc http://www.opengeospatial.org/standards/wms 2. růžička j. pomohou webové služby odstranit nočńı můru kartograf̊u? in. sborńık z konference 16. kartografická konference (mapa v informačńı společnosti). brno. 2005. geinformatics fce ctu 2010 55 http://gis.vsb.cz/ogctest/ http://www.opengeospatial.org/standards/wms růžička j.: testing metadata existence of web map services univerzita obrany. 10s. isbn 80-7231-015-1. dostupné na www: http://gisak.vsb.cz/wsco/publikace/ruzicka 2005 brno.pdf 3. růžička j.; kaszper, r. opět o metadatech v geoinformatice. in. sborńık z konference 1. národńı kongres v česku geoinformatika pro každého. mikulov 29. – 31. května 2007. 8 s. dostupné na www: http://gis.vsb.cz/ruzicka/pub/2007/ruzickakaszperformatovane.pdf geinformatics fce ctu 2010 56 http://gisak.vsb.cz/wsco/publikace/ruzicka_2005_brno.pdf http://gis.vsb.cz/ruzicka/pub/2007/ruzickakaszperformatovane.pdf publication ethics standards in geoinformatics fce ctu editorial dear readers, in our previous editorial (no. 1/2015), we have introduced the basic technological publishing standards. now it is time to follow up and introduce also ethical issues related to scholarly publishing. open access, open publishing, and open scientific environment bring great opportunities as well as unfortunately higher threads in scholarly publishing. the open electronic environment accelerates the information and publishing process and the quantity of information being communicated, however the authors, reviewers and readers are still human beings, and the day still has 24 hours. the technology is quite not that far at all to be able to check human motivation, thoughts, honesty, and righteousness. every possible technical attempt to check the quality of any work, article, journal, review is short when it comes down to human invention. 
thus, in order to keep high quality in scholarly communication, the worldwide academic community together with the prestigious international scholarly publishers have defined a set of ethical principles in scholarly publishing. these principles are not meant to constrain authors or other parties in their rights in free scholarly publishing. it is meant to prevent fraud that cheats mostly and above all the end readers. an associated issue related not only to publication ethics is worth mentioning as well. people keep inventing many principles focused on preventing negative aspects constraints, criticism, punishments, pursuing negative aspects of human behavior. positive motivation is rare. perhaps an opposite attitude might be considered – to focus on the honest and righteous researchers, to establish a set of positive criteria to reward and motivate those researchers by positive means rather than frightening them with restrictions. is there a place to focus on prevention in the positive manner rather than on punishing negative behavior? especially since many negative aspects might be generated also by lack of experience, not only by purposeful cheating. this is a broad issue which should have an entirely individual article devoted to, however we believe it should have been at least mentioned together with this topic as well. now, back to the reality of ethical standards and criteria of ethical scholarly publishing. many journal editors accept their own ethical standards and publication ethic statements. many of those are members or just follow ethical guidelines and other activities conducted by organizations that focus on keeping ethical principles with research and publishing activities. there are numerous organizations that publish publication ethics statements, e.g. cope (committee on publication ethics), a british membership organization open to all editors of academic journals and anyone who is interested in publication ethics. it provides advices in all aspects of publication ethics, guidelines, samples, and above all it provides help to all its members with any ethics violation cases. besides cope, there are further numerous organizations dealing with research and publication ethics guidance and advisement, most of them focused on a particular research area, e.g. medicine, etc. geoinformatics fce ctu 15(1), 2016, doi:10.14311/gi.15.1.0 1 http://publicationethics.org/ http://dx.doi.org/10.14311/gi.15.1.0 http://creativecommons.org/licenses/by/4.0/ publication ethics standards in geoinformatics fce ctu cope clearly defines the ethical principles that are expected from the authors, reviewers, editors, and the publisher that are acceptable in all research areas. these principles have been adopted by geoinformatics fce ctu journal. most publishers strive to publish original content which brings new findings in the particular research field. authors shall thus submit only original work, which has not been plagiarized and has not been published anywhere else nor has been submitted to other publication simultaneously. authors should list only the real contributors to the work, and should not forget or omit any team member who has been responsible for the work. guest, gift, and ghost authorship are prohibited. all co-authors take collective responsibility for their work and for the content of their publications. authors shall present only complete and final research results. 
research presented in the publication should have been conducted in ethical manner, research results shall be presented honestly, without falsifications, or any inappropriate data manipulation, i.e. methods used, calculations, findings, and data presentation shall be checked carefully to make sure all information at all stages is correct. all research stages shall be described clearly to allow other research teams to repeat the research. reviewers are expected to accept only such works for review which they have professional expertise for, and they shall keep confidentiality of the reviewed document. the review shall be objective, constructive, not influenced by the authors’ identity or background, or by any commercial issues. most pressure is still being put on the editors who should have the highest motivation to publish authentic and high-quality research results. they are responsible for the content of the journal, they assure quality of the content that they publish, and shall make sure all research published has been done in ethical manners. editors are those who pick reviewers and ensure they are honest and qualified professionals in their research area. editors should also conduct fair communication with the reviewers and authors, they should respond to all comments and complaints by either side and make all effort to resolve these. editors shall further ensure intellectual property rights of the authors. they should keep aware of new issues within intellectual property laws and conventions. by accepting these standards, geoinformatics fce ctu joins the large international family of scholarly publishers of high-quality content who have done all they could to guarantee high scientific and technical quality of the published materials. we wish all readers and editors continuous quality improvement and much success in the future! lenka němečková iva adlerová ctu in prague—central library geoinformatics fce ctu 15(1), 2016 2 http://orcid.org/0000-0003-2297-2532 http://orcid.org/0000-0002-6287-9212 _______________________________________________________________________________________ geoinformatics ctu fce 2011 170 development of a 3d information system for the old city centre of athens nikos kaskampas, kalli spirou-sioula, charalabos ioannidis national technical university of athens, school of rural & surveying engineering 9 iroon polytechniou st, athens 15780, greece nkaskampas@gmail.com; kallispyrou@gmail.com; cioannid@survey.ntua.gr keywords : 3-d city modeling, gis, geometric documentation, visualization, cultural building abstract: the representation of three dimensional city models has been gaining ground increasingly in many scientific fields in the recent years. 3d city modelling is a scale representation of natural and artificial objects in order to present the spatial data and highlight the social development of the city. depending on its importance or the purpose of use, an object can be represented in various levels of detail. an increasing tendency to 3d city models is their integration into gis, which proves to be an effective tool for managing, analyzing and planning in order to make decisions about technical, administrative and financial matters. a combination of digital photogrammetric techniques and laser scanning data contribute greatly to this, since a variety of data, such as aerial, satellite and terrestrial images, point clouds from airborne and terrestrial laser systems, and also a variety of photogrammetric and mobile mapping methods are available. 
the objective of this paper is the development of a 3d information system (is) for the three-dimensional geometric documentation of the buildings owned by the ministry of culture in the old city centre of athens, greece, named “plaka”. the area has been inhabited continuously since the prehistoric era, it has a special architectural style and includes a number of unique cultural heritage monuments. the data used for the reconstruction of the 3d model of plaka consisted of aerial and terrestrial images, while raster, vector and descriptive data were used for the creation of a 2d gis, which served as the background for the development of the 3d gis. the latter includes all of the qualitative and quantitative information related to the 3d building models owned by the ministry of culture according to users’ needs. each building in the vicinity of plaka was depicted in one of the four differ ent levels of detail created for the purpose of the study, according to their ownership status and other criteria. the building models, depicted in the highest level of detail, were owned by the greek ministry of culture whereas the other buildings (of a lower level of detail) were depicted in a more subtractive way. therefor an integrated is was developed that combines descriptive information, e.g., use, legal status, images, drawings, etc, with the spatial information and geometric documentation in three dimensions. 1. introduction three dimensional (3d) city modelling is a scale representation of natural and artificial objects intended to present the spatial data and highlight the social development of the city. the fast, accurate and reliable reconstruction of such a model provides a comprehensive view of the development and management of the city, which is essential to reduce risks and improve its effective management [1]. so the 3d city modelling has a continuously growing range of applications. the integration of a 3d model into a 3d information system (is) enables the management of data and processes related to a city‟s infrastructure. the major factors requiring the development of a 3d gis for the threedimensional spatial and descriptive information management are the need for better visualization of objects, the restitution of their various attributes in 3d space so that semantically rich georeferenced 3d models may be created, and the possibility of better information management regardless of the types of data storage and operating systems used [2, 3]. these systems have the advantages of a 2d gis while also provide visualization and navigation in the model space. they are used in many study areas such as ecological studies, environmental monitoring, geological analysis, mining exploration, architecture, automatic vehicle navigation, archaeology, 3d urban mapping and landscape planning. the integration of 3d city models in gis not only provides the spatial information of depicted objects but also allows their relation to important information while maintaining the identity of the city, especially when it is of great historic significance. therefore, this integrated system allows the monitoring of the status quo in order to take appropriate measures to safeguard the cultural heritage of buildings, and to intervene at any economic or architectural level [4]. the techniques used to model city objects include terrestrial surveying, aerial surveying and integration processes. 
the criteria to be considered are the level of detail and the accuracy, the accessibility of the buildings and of their specific characteristics, and the available data in the area. photogrammetry and laser systems (lidar and mobile terrestrial laser scanners) constitute the main tools for geometrical documentation of objects. in many cases integrated mobile mailto:nkaskampas@gmail.com mailto:kallispyrou@gmail.com _______________________________________________________________________________________ geoinformatics ctu fce 2011 171 mapping systems are used, consisting of a moving platform that includes navigation sensors and mapping sensors [5, 6]. mapping sensors include pushbroom line scanners (2d time-of-flight laser scanners) and digital and video frame cameras of usually small format and large frequencies for image capturing (7 – 15 frames per sec) with a small exposure time to avoid blur. usually, camera systems with 6-8 ccd video cameras are used to provide terrestrial panoramic images. automated procedures for 3d model creation are used for fast processing of multiple oblique terrestrial images in combination with terrestrially or aerially collected 3d point clouds [7]. the result is the creation of walkor fly-through models or the panoramic views of the urban spaces (street views). several examples of 3d real or augmented models of various cities all over the world are already available (e.g., for berlin, montreal, new york, tokyo, venice, vienna, paris, etc.) and can be found at web-sites such as www.3dcadbrowser.com, www.cybercity3d.com, www.computamaps.com, etc. in this paper the development of a 3d is for the documentation of the buildings owned by the ministry of culture in plaka, the old city centre of athens, greece, using aerial and terrestrial imagery and field measurements is described. firstly a 2d gis was developed after collecting all the necessary raster, vector and descriptive data, which served as the background for the creation of the 3d gis. the latter includes all the qualitative and quantitative information related to the 3d building models owned by the ministr y of culture, according to user‟s needs. the buildings were depicted in four different levels of detail according to specific criteria (e.g., ownership status, historical or architectural importance, etc). for the implementation of the study a variety of software were used. in particular, data acquisition and editing was achieved using autocad, photoshop cs3, archis and stereo soft kit (photogrammetric packages), while 3d studio max and google sketchup 6.0 were used for the modeling of the buildings. finally, arcinfo 9.3 was used for the development of the 2d and 3d gis. the integration of 3d city models in gis enables the storage of important information so as to maintain the identity of the city, especially when it is of great historic significance. 2. plaka: the old city centre of athens plaka area is situated in the centre of athens, greece, at the foot of the hills where acropolis was built. since the 1ř30‟s when the name plaka was given to the region, it has remained a part of the greater city of athens over the centuries having its history bound up with that of athens. athens has been continuously inhabited since 3500 bc. times of rise and decline succeeded one another making it a place of great changes and interest. a vast amount of monuments and ruins can be found scattered around athens, each one reminding of a different historic era. 
especially in plaka one can see the ancient agora, roman buildings, such as the library of hadrian and the odeon of herodes atticus, churches of the byzantine period and buildings constructed during the turkish occupation. the layout of plaka has undergone many changes over the centuries. as shown by the traces of walls, the size of the city was increasing or decreasing depending on the periods of prosperity at the time of war or peace. during the archaic period at the time of solon the city occupied a small area of 52 ha around the acropolis which later increased to 250 ha during the hellenistic period until the fall to the franks when the city diminished. at the period of the ottoman rule, the city was slowly beginning to grow and spread. it is the period from which the region has inherited its current layout. today plaka has become a touristic area. the buildings are homogenous since the majority of them were built at the request of the government in 1835 when intensive construction works took place in downtown athens. however, despite the studies that were carried out, the layout of plaka remained unchanged from the years of ottoman rule, and consists of narrow streets and congested construction. 3. data acquisition and processing 3.1 spatial data spatial data has been collected so that all the necessary information could be drawn about the buildings in the area of plaka and the study of its geometric evolution over the centuries. four types of spatial data can be distinguished which were collected for different purposes. these include historical and modern maps, raster data and direct field measurements. the historical maps that were used derived from the cartographic and representational material found in [8] and constitute the product of long-term study and analysis of historical sources. they are a valuable resource for twelve chronological periods from the late neolithic age to the 20th century. through these maps the user can observe the geometric changes in the area of plaka over the centuries by overlaying maps of various time periods. furthermore, users are given the opportunity to gain direct information on the time of creation of these monuments and the expa nsion of athens over the centuries. contemporary digital vector maps derived from secondary procedures and were either used as background to the study area or as a means to retrieve qualitative information about the buildings owned by the ministry of culture. they consist of a topographic map of plaka, at a scale of 1:2000 created in 2002; street layout of plaka created in 1974, plans defining specific conditions and limits on building in plaka created in 1987, land use diagram of plaka created in 1993 and plans defining listed buildings in plaka created in 1980, all at a scale of 1:1000; and detailed façade plans and ground plans of buildings owned by the ministry of culture. the basic background of the http://www.3dcadbrowser.com/ http://www.computamaps.com/ _______________________________________________________________________________________ geoinformatics ctu fce 2011 172 is consists of an orthophotomap and a stereo-restitution drawing produced using aerial images of plaka at a scale of 1:10000, taken in 2001. the outline, roofs and volumes of buildings and blocks have been derived from the latter. the rest of the plans were used to distinguish buildings owned by the ministry of culture from others as well as kο retrieve further information about everyone of these buildings. 
in addition, necessary geometric information, e.g., control points, detailed information for a 3d reconstruction of the internals of the buildings, etc, have been collected by field surveys. moreover, for the study of the urban development of plaka over the decades, large-scale aerial photographs were collected for five chronological periods from 1929 to 2001. further stereo-photogrammetric processing can lead to the creation of three dimensional city models for various time periods so that the development of the urban network can be studied, and detailed information concerning the area, height and volume of buildings may be determined. 3.2 attributes digital data were collected to provide descriptive data to the gis so that information can be retrieved according to users‟ needs. organizing and storage of data were accomplished through the use of the relational database structure in arcinfo 9.3 (info), where data are shown in tabular form. for each entity the type of descriptive data was defined along with the primary key of each table. apart from the information derived from the spatial maps, a database recording the properties of the ministry of culture in plaka in microsoft office access environment was also used. since the objective of this research focuses mainly on buildings owned by the ministry of culture, their descriptive data when inserted into the system are sufficiently detailed compared with those of other entities which have been mainly used for the visualization of the wider environment. in particular the attributes of the buildings owned by the ministry of culture are the area, the address, the building factor, the qualification as a listed building, the description of the current situation, their ownership‟s transfer to other institutions, the existing use, etc. this information was derived from existing databases and in situ data collection. 4. development of a 3d is the need to acquire three dimensional information in recent years is rapidly growing in many scientific fields such as urban planning, architecture and visualization of environment, etc. this growing need has led to the rapid technological evolution in the production of specific software packages that serve this purpose. therefore, three-dimensional gis have been developed over the last decade as processing systems for spatial and descriptive information [9]. these systems have an advantage over two-dimensional gis due to their capability of visualization and navigation through the depicted model space. however, although these systems should include the same advanced management features and analysis with two-dimensional systems, yet they have not been developed to the same extent. usually, users employ it primarily as a tool for visualization and navigation, rather than as a stand-alone software to address a problem [10]. the use of three-dimensional gis contributes to the representation and management of three-dimensional data which is of great importance for the better understanding of the data location relative to its surroundings. for the purpose of this project, a 3d gis was developed taking advantage of its capabilities in order to represent 3d city models. the creation of this system was facilitated by the development of a 2d gis which includes the entities that will be represented in three dimensions. the system was created in arcscene environment which is a 3d analyst extension for arcgis 9.3 and is used to visualize, manage and analyze three-dimensional vector and raster data. 
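as a small, hypothetical illustration of the attribute tables described in section 3.2 above (the real system stores them in the built-in arcgis info database; the field names and the example record below are ours), a relational table for the buildings owned by the ministry of culture might look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ministry_building (
        building_id     INTEGER PRIMARY KEY,  -- primary key of the entity
        address         TEXT,
        area_m2         REAL,
        building_factor REAL,
        listed          INTEGER,              -- 1 if qualified as a listed building
        existing_use    TEXT,
        current_state   TEXT
    )
""")
conn.execute(
    "INSERT INTO ministry_building VALUES (1, 'example address', 150.0, 0.8, 1, 'museum', 'restored')"
)

# a simple attribute query of the kind served by the 2d gis
for row in conn.execute("SELECT address, existing_use FROM ministry_building WHERE listed = 1"):
    print(row)
```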
4.1 two dimensional information system the design of an information system consists of different stages which determine the overall objective of implementation, the data input into the database, relationships between them, their constraints and the logical and physical organization. these stages are the conceptual, logical and physical design of an information system [11]. at the stage of conceptual design, the necessary entities were determined that will be processed in the system along with their attributes. these are the buildings owned by the ministry of culture, other buildings in the area, archaeological sites, churches and blocks. more detailed descriptive data was introduced about the buildings owned by the ministry of culture in accordance with the main subject of study. at the stage of logical design, the method of data organization and the software used to develop the system were selected. the structure chosen for the database is the relational database structure and the data is displayed in tabular form. each entity has its own table where descriptive data is stored. the software used was the arcgis desktop 9.3 and in particular the arcinfo version. firstly, the vector (topographic map) and raster data was inserted into arcmap in order to form the background of the system. then the types of data and their attributes were inserted into the built-in arcgis database (info). at the final stage of the physical design, details concerning data storage and management procedures were determined in order to produce the final products (figure 1). after the database design various procedures were implemented concerning the data management and information retrieval. such were the creation of spatial queries and the connection to external image, multimedia or cad files. _______________________________________________________________________________________ geoinformatics ctu fce 2011 173 figure 1: screen showing vector and raster spatial data of the geographic information system 4.2 three dimensional information system the implementation of the 3d is was held in arcscene environment where the whole process was divided into two information levels, that of the background and of the buildings. for the creation of the background, a digital elevation model (dem) and an orthophoto of plaka were used. the dem was created through the 3d analyst toolbar using digitized contours of the study area, while the final background layer resulted from the process of draping the orthophoto on the surface (figure 2 left). the next step consisted of the creation of 3d buildings which can be displayed at different levels of detail (lod) depending on the users‟ needs. one of the reasons for the representation of b uildings at different levels of detail is the system optimization relating to hardware limitations in a gis environment. for instance, buildings depicted at a lower level of detail usually lead to a more effective management in 3d space. it is therefore obvious that due to computational power, the buildings depicted at higher level of detail refer to small areas compared to buildings of lower detail that can cover larger areas [12]. figure 2: left: creation of the 3d background which resulted from a draping process. right: representation of buildings at the first level of detail (lod). _______________________________________________________________________________________ geoinformatics ctu fce 2011 174 the first and lowest level of detail consists of the two-dimensional depiction of the buildings. 
this level, in which there is no concept of altitude, i.e. of the volume of the building, shows the outlines of buildings as seen on topographic maps or ground plans (figure 2 right). the depiction of the buildings at this level is the basis for the three-dimensional representations. at the second level of detail the buildings are represented by a plain volume without emphasis on building facades, roofs, etc. at this level of detail, where the facades receive only a plain coloring, the buildings that are not owned by the ministry of culture were represented. the planimetric information came from the topographic map of the area, while the vertical information resulted from photogrammetric measurements or on-site registration of the floors of the buildings (figure 3 left). figure 3: 3d representation of buildings at the second lod (left) and at the third lod (right) in arcscene. at the third level of detail, the buildings are depicted in more detail, as their facades result either from unrefined images or from orthophotos depending on users' requirements. the roofs of buildings are modeled so that their geometry can be represented to a greater extent. most buildings owned by the ministry of culture were represented at this level (figure 3 right). in order to determine their height and volume, 3d geometric information was collected from the stereo-photogrammetric processing of the aerial images that were used to produce the orthophoto of plaka. for the texture mapping of the facades of buildings, terrestrial photographs were taken. specifically, for the buildings requiring more than one photo, geometric projections were carried out on individual photos using the rectification software archis so that a photomosaic could be generated. furthermore, by using the photoshop cs3 software the noise and obstacles appearing in the photos were removed (figure 4). figure 4: photomosaic of a building's façade after editing. the next step concerned the creation of the 3d building models using the software google sketchup 6. this software was selected because of its compatibility with arcscene, allowing models to be transferred through a plug-in in the format of microsoft access (.mdb); the files were imported into arcscene as multipatch features and not as symbols, so that they can be managed and analyzed in the 3d gis. after importing the outlines of the buildings, the volumes and roofs were modeled on this basis together with the 3d geometric information. at the fourth and highest level of detail, buildings are depicted in great detail, since their facades and roofs are shown in their real form and not entirely through draped images. the more detail the building shows, the more geometric information is inserted in the is, which contributes to a more realistic representation. in some cases the buildings owned by the ministry of culture were represented at this level, with varying levels of accuracy. the accuracy of the desired scale sets the ground rules for collecting raw data. such a model was created in 3d studio max and resulted from a combination of geodetic, photogrammetric and topometric methods for a 3d reconstruction at a scale of 1:50.
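as a simplified illustration of the block-model step described above for the second level of detail (footprint plus a single measured height), the following hedged python sketch extrudes a 2d outline into the vertex and face lists of a prism; the footprint coordinates and the height are invented for the example, and the real models were of course built in arcscene and sketchup rather than generated this way.

    # hedged sketch: extrude a building footprint (x, y polygon) into a simple lod2-style prism
    def extrude_footprint(footprint, height):
        """return vertices and faces of a prism built from a 2d outline and a height."""
        base = [(x, y, 0.0) for x, y in footprint]
        top = [(x, y, height) for x, y in footprint]
        vertices = base + top
        n = len(footprint)
        faces = [list(range(n)),            # floor polygon
                 list(range(n, 2 * n))]     # flat roof polygon
        for i in range(n):                  # one quad per wall
            j = (i + 1) % n
            faces.append([i, j, n + j, n + i])
        return vertices, faces

    # invented footprint (metres) and a height taken e.g. from photogrammetric measurement
    verts, faces = extrude_footprint([(0, 0), (12, 0), (12, 9), (0, 9)], height=7.5)
    print(len(verts), "vertices,", len(faces), "faces")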
figure 5 illustrates views of the 3d model in 3d studio max. figure 5: perspective views of a 3d model building represented at the fourth lod, in 3d studio max. after the reconstruction of the 3d building models, their attributes were imported into the database and linked to useful external files. since the contents of the 3d gis are common to those of the 2d gis, the procedures performed to establish the two-dimensional system formed the basis for the three-dimensional one. specifically, the common data were imported into the three-dimensional system by joining the tables of entities between the two systems. 5. products of the information system 5.1 two dimensional products the retrieval of qualitative and quantitative information in the two-dimensional is can be accomplished either by selecting specific geospatial data or by defining queries based on criteria. in the first method, arcmap enables the user, through the descriptive data of the database, to select any spatial element (e.g. a building) using the identify tool, thus obtaining information about the building directly, without having to engage with the database (e.g. through tables of attributes). another advantage of the system is that it gives the user the opportunity to interact with external files, such as image, multimedia or cad files, by selecting spatial data. in particular, the linkage of some buildings owned by the ministry of culture to external files, e.g. existing architectural plans, was implemented using the html popup tool. this tool displays an html page of the selected building containing a list of external data depending on the needs of the user. such files may be textual information, old and new photographs, ground plans, views and sections, as well as videos depicting the buildings at a higher level of detail (figure 6 left), which is important for the 3d is in terms of data volume. another method for information retrieval in the two-dimensional system consists of the selection of spatial data based on attribute and location criteria. in this application the entities concerning the buildings owned by the ministry of culture were selected based on their attributes. for this purpose, several queries were formed, whose results were stored in separate thematic layers for the users' convenience. the queries were expressed through sql, including the entities and their descriptive data. 5.2 three dimensional products the extraction of information mentioned above can also be applied to the 3d gis by simply connecting it to the 2d is. by linking tables between the two systems, the layers of the three-dimensional system acquire both the descriptive information of the features and the qualitative information generated by creating links with the html popup tool. in this way it is possible to combine the visualization of buildings with the database of the system, so as to extract the relevant information according to users' needs. figure 6 (right) illustrates the corresponding example of connecting to external files at the three-dimensional level. apart from the thematic information, arcscene enables the user to create a three-dimensional virtual tour video, showing all the building models and their surroundings in real time in order to allow navigation in 3d space. in this application, a series of images (keyframes) were captured, which were given a field of view (camera position) at specific intervals of the total length of the video.
afterwards, the generated video was exported to .avi format, in order to be compatible with most video player software, while the frame rate was set to 30 frames per second. in order to create a functional application, available computing power must be taken into account, since depending on the number of buildings depicted (which in our study were 132) and their detailed representation, a very powerful computing system may be required. for this reason the 3d building models of lower levels of detail were linked to videos of buildings represented in the highest level of detail and accuracy. such a system becomes more convenient for users while it does not require time-consuming procedures for either the update of existing data, insertion of additional data or even information retrieval in three-dimensional level. figure 6: linkage of buildings to external files via html popup tool in 2d (left) and 3d space (right). 6. conclusions the old city centre of athens is a unique and very interesting area. it combines very important ancient, roman and byzantine monuments in the urban area strictly controlled by limitations in building constructions and land use. thus, it consists of small blocks with low-height buildings (up to 3 floors) and very narrow streets creating a complex urban fabric. the aim of this project was not to create another 3d city model, similar to what has been done for many historical european cities. the main objective was to create a detailed 3d information system for more than 100 buildings owned by the ministry of culture, which are scattered throughout the area. thus an is was developed that combines descriptive information about these buildings, e.g., use, legal status, images, drawings, etc, with the spatial information in three dimensions, with simultaneous geometric documentation and with display of their shell and the surrounding area. in the interest of a complete and proper presentation, the other buildings in the area were represented as 3d models in simpler ways and forms including descriptive data in the database. such a system gives the capability of managing and analyzing large volumes of data that can be linked with external files and constantly updated. this study combines digital photogrammetric techniques with gis. photogrammetry is an efficient, alternative solution to traditional methods of measurement and provides several economic techniques for the acquisition of three-dimensional data. by using digital photogrammetric techniques one can take the appropriate measurements of objects on images, considerably reducing the amount of field work. the three-dimensional products can serve many purposes since the is is a tool for reference, measurement and representation of buildings in three dimensions. in this particular case, the system provides the opportunity of optimal management of properties owned by the ministry of culture, while monitoring the current situation so as to take appropriate measures to safeguard the cultural heritage of buildings or intervene at any economic or architectural building level. further study will be carried out about 3d representation of the plaka area at different periods, so that there is a comprehensive view of changes every few decades while giving the opportunity to manage and enrich the system depending on each project. _______________________________________________________________________________________ geoinformatics ctu fce 2011 177 7. 
references
[1] wang, l., hua, w.: survey and practice of 3d city modelling, lecture notes in computer science, berlin, 2006, vol. 3942, pp. 818-828.
[2] benoit, f., alain, l.: 3d city gis – a major step towards sustainable infrastructure, http://ftp2.bentley.com/dist/collateral/whitepaper/wp_3d_city_gis_long.pdf, 2011-05-11.
[3] zlatanova, s., rahman, a., pilouk, m.: trends in 3d gis development, journal of geospatial engineering, hong kong, 2002, vol. 4, no. 2, pp. 71-80.
[4] kirimura, t., yano, k., kawaguchi, h.: applicability of 3d gis to the view preservation policy of kyoto city, proceedings of the 22nd cipa international symposium, kyoto, japan, 2009.
[5] gandolfi, s., barbarella, m., ronci, e., burchi, a.: close photogrammetry and laser scanning using a mobile mapping system for the high detailed survey of a high density urban area, proceedings of the international archives of the photogrammetry, remote sensing and spatial information sciences, beijing, china, 2008, vol. xxxvii, part b5, pp. 909-914.
[6] tunc, e., karsli, f., ayhan, e.: 3d city reconstruction by different technologies to manage and reorganize the current situation, trabzon, turkey, 2004, www.isprs.org/proceedings/xxxv/congress/comm4/papers/388.pdf, 2011-05-11.
[7] malumpong, c., chen, x.: interoperable three-dimensional gis city modeling with geo-informatics techniques and 3d modeling software, proceedings of the international archives of the photogrammetry, remote sensing and spatial information sciences, beijing, china, 2008, vol. xxxvii, part b2, pp. 975-979.
[8] travlos, j.: urbanization of athens – from the pre-historic years until the beginning of the 19th century, kapon editions, athens, isbn 960-7254-01-5, 1993 (in greek).
[9] abdul-rahman, a., pilouk, m.: spatial data modelling for 3d gis, springer-verlag berlin heidelberg, berlin, 2007.
[10] stoter, j., zlatanova, s.: 3d gis, where are we standing, http://repository.tudelft.nl/assets/uuid:baa06f95-bb9443b1-8854-174f4259c17d/gdmc_stoter_2003d.pdf, 2011-05-11.
[11] longley, p., goodchild, m., maguire, d., rhind, d.: geographical information systems and science, john wiley & sons ltd, u.k., 2005.
[12] kolbe, t. h., gröger, g.: towards unified 3d city models, institute of cartography and geoinformation, university of bonn, germany, 2003, www.ikg.uni-bonn.de/fileadmin/sig3d/pdf/cgiav2003_kolbe_groeger.pdf, 2011-05-11.

olomouc: possibilities of geovisualization of the historical city
stanislav popelka1, alžběta brychtová1
1 palacký university in olomouc, faculty of science, department of geoinformatics, třída svobody 26, olomouc, czech republic
standa.popelka@gmail.com, alzbeta.brychtova@upol.cz
keywords: geovisualization, spatio-temporal change, 3d, 3d map, fortress, olomouc
abstract: olomouc, nowadays a city with 100,000 inhabitants, has always been considered one of the most prominent czech cities. it is a social and economic centre whose history began around the 11th century. the present appearance of the city has its roots in the 18th century, when the city was almost razed to the ground after the thirty years' war and a great fire in 1709.
after that, the city was rebuilt to a baroque military fortress against prussia army. at the beginning of the 20th century the majority of the fortress was demolished. character of the town is dominated by the large number of churches, burgher’s houses and other architecturally significant buildings, like a holy trinity column, a unesco world heritage site. aim of this project was to state the most suitable methods of visualization of spatial-temporal change in historical build-up area from the tourist’s point of view, and to design and evaluate possibilities of spatial data acquisition. there are many methods of 2d and 3d visualization which are suitable for depiction of historical and contemporary situation. in the article four approaches are discussed comparison of historical and recent pictures or photos, overlaying historical maps over the orthophoto, enhanced visualization of historical map in large scale using the third dimension and photorealistic 3d models of the same area in different ages. all mentioned methods were geolocalizated using the google earth environment and multimedia features were added to enhance the impression of perception. possibilities of visualization, which were outlined above, were realized on a case study of the olomouc city. as a source of historical data were used rapport plans of the bastion fortress from the 17th century. the accuracy of historical maps was confirmed by cartometric methods with use of the mapanalyst software. registration of the spatial-temporal changes information has a great potential in urban planning or realization of reconstruction and particularly in the propagation of the region and increasing the knowledge of citizens about the history of olomouc. 1. introduction visualization of historical objects, cities and landscapes are used in many fields of human activity, but especially in archaeology and tourism. however, most of the projects leave aside the possibility of comparison the historical state with the present (e.g. virtual exploration of the city of livorno [3]. thus, the user loses connection to the reality and then it is hard to imagine where the objects stood. comparison of the past with the present is often done through a pair of photographs or paintings depicting the same place in different time periods. in most cases, a printed publications (e.g. [5]) or calendars are the media, where the spatial information is missing so the user knowledge of the particular place is required. another and more perspective is the variant of the visualization of georeferenced historical maps overlaying the current map or orthophoto. by changing the transparency of the old maps the historical situation can be directly compared with the modern. these maps can be viewed on a large number of mapping portals, or as a layer in google earth, so maps are available to the general public. today, computer-generated perspective views of cartographic content is widespread, often named as 3d maps. the development of geoinformation technologies has facilitated the creation of graphic representations and therefore they are used not only by experts in the field of geoinformatics or cartographers, but also by the general public. as the haeberling [10] states, perspective perception of a generalized and symbolized geographic space offers often a better understanding of spatial coherences. 3d maps could be seen as a supporting complement of classic orthogonal maps. except a digital representation of the contemporary state, which can be used e.g. 
for educational purposes [20, 21], it is possible to use 3d maps to represent the historic city's face e.g. [16] or country‟s [17]. thanks to the 3d perception, the user can better imagine the spatial context. the physical model, such as very popular bronze models of cities, loses connection to reality, whereas in the case of the digital model it can be viewed in the context of map or the orthophoto. a virtual variant of the 3d model of the town was in the focus of the project rome reborn [8] carried out by experts from italy and the usa. during the project the city of rome was completely created in 3d as it was in 320 bc. the buildings were modeled in height detail, despite of the fact that for ___________________________________________________________________________________________________________ geoinformatics ctu fce 268 most buildings there have not been accessible source of information about how these buildings looked like. 3d models can also present how the city would developed if there would not been an engineering boom and the destruction of historic buildings, as in the case of melbourne [4]. however, more often is a virtual reconstruction of urban areas or individual buildings (e.g. honselaarsdijck palace [1], 3d-arch project [19]). there are many methods of 2d and 3d visualisation which are suitable for depiction of historical and contemporary situation and for following spatio-temporal changes of built-up area. these methods have various levels of reality approximation. the focus of the project was put only on the capabilities of the virtual (computer-based) environment. this article briefly describes methods of geovisualization of spatial-temporal changes in the city. geoviosualization is in this case understood as a set of cartographic tools and techniques supporting processing of data with geographical (spatial) or time component (according to ica). the project outputs have been tested on a case study comparing historical and contemporary face of the city of olomouc, which has changed over the past 150 years from a strong baroque fortress to the modern city. the motivation of the project is to introduce this fundamental change to the general public, and remind the history of the city of olomouc. 2. spatial data and information sources for the processing with the informational content of visualization methods many sources of information and maps have been used. scientific literature documenting the historical context and descriptions of the fortress objects was used [15, 23, 6]. pairs of images were taken from the publication olomoucke promeny [5]. the most important source of spatial information has been rapport plan from the year 1842. it is available in kriegsarchiv in vienna and for the purposes of this paper was lent by the mof's ltd. rapport plan contains a description of the fortress core (alias noyau), there is only floor projection of individual buildings. in determining the approximate height of the buildings knowledge of the preserved buildings height and drawings in the book "fortress olomouc" [15] were used. the last sources of information were historical plans of specific objects. however, they were available for only a few buildings. in the case of 3d digital reconstruction of the historic fortifications of the city of olomouc the only possible method to use was the method of empirical techniques. most of the walls do not currently exist. 
only historic maps and sources for landscape reconstruction were used without doing extensive research on archaeological reality, where high degree of truthfulness is required. figure 1: rapport plan of olomouc from year 1842. ___________________________________________________________________________________________________________ geoinformatics ctu fce 269 3. geovisualization methods the aim of this project is to create an interactive application used to visualize the spatial-temporal changes in the historical city of olomouc, using five visualization methods textual information, a comparison of historical and contemporary photographs, georeferenced historical maps overlaid over current orthophoto and particularly 3d map of the bastion fortress. the technique of photorealistic 3d model has only been tested on a few objects because of the high time demands and lack of appropriate data. essential goal is to bring near a period of 150 years ago to ordinary internet users by providing a well known google earth environment. the advantage of using google earth is a possibility of spatial localization. not only maps can contain the spatial information but also historical and contemporary photographs or text paragraphs describing the fortress object can be geolocalized. 3.1 textual information textual information can provide a very detailed and accurate description of spatial-time changes. compared to other methods mentioned below, its disadvantage lies in demands on user to apprehend the information from the text and the inflexibility of communication skills in the perspective of different languages. the bases for the preparation of the textual description of spatial-temporal changes were taken from the scientific literature. presentation of the textual information uses visualization environment of google earth and the kml language. the text is presented via the kml placemark element, which contains the definition of the geographic location and the style of information bubbles appearance, whose presence and navigating to can be interactively controlled by the user. the publication of the text in bubbles follows the rules of html and css. 3.2 pictorial information pictorial, especially photographical documentation is a simple procedure for recording the current state of the architectural object. before the invention of the photography it is difficult to say with certainty that the image is a faithful replica of the historic state. the paintings and engravings were created as works of art, or biased, and thus often capture only half-truths. therefore, when reconstruction of the historic state it is necessary to approach the nonphotorealistic works with a caution. the credibility of such information should be supported by archaeological research or period maps. image content can be divided into two main categories perceptual content and semantic content [13]. perceptual content includes attributes such as colour, intensity, shape, texture, and their temporal changes, whereas semantic content means objects, events, and their relations [11]. both components of visual perception are important to monitor spatial-temporal changes. comparison of the current and historical status via a pictorial documentation requires the acquisition of pairs of images of the same object. used pairs of images were taken from the book olomoucke promeny [5]. 
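the comparison principle applied to such image pairs can be sketched in a few lines: a user-controlled blend of a historical and a contemporary photograph. the snippet below is a hedged python illustration using pillow with placeholder file names; the actual application, described next, was written in adobe flash and exposes the blend factor as a slider.

    from PIL import Image

    # placeholder file names; both photos are assumed to be already roughly aligned
    historical = Image.open("old_photo.jpg").convert("RGBA")
    contemporary = Image.open("new_photo.jpg").convert("RGBA").resize(historical.size)

    # blend factor 0.0 shows only the historical photo, 1.0 only the contemporary one
    for alpha in (0.25, 0.5, 0.75):
        Image.blend(historical, contemporary, alpha).save(f"blend_{int(alpha * 100)}.png")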
since the contemporary photographs in this publication were not taken with accurate comparison with the historical ones in mind, some pairs turned out to be completely useless because of an entirely different viewing angle. some historical pictures had to undergo a slight transformation (the distort feature of adobe photoshop was used) to fit better into the comparison with the modern photos. for the purpose of this visualization method a simple application was developed using adobe flash and actionscript. the application allows interactive comparison of two images. the principle is based on user-controlled changing of the opacity of the contemporary image overlaid on the historical one. each application is, similarly to the textual method, presented using the kml placemark element in the google earth environment. the same procedure is used; a cdata section, which defines the reference to the flash swf file, is inserted into the body of the placemark. figure 2: flash application for comparing two images from different time periods. 3.3 map unlike photographs, maps are abstract models of reality and involve transformations of various kinds [12]. the main task of the map is the presentation of spatial information. in general, users use maps in different ways: for recognition of particular presented objects, for general orientation in the surroundings and for various map measurements [18]. cartography is the most important way to transmit and share geographic knowledge. although map reading requires some experience with the synthesis of spatial information, the transfer of spatial information from maps is much faster than using text or images [14]. a simple but illustrative and comprehensive method to show spatial-temporal changes is to display a georeferenced historical map over the orthophoto or a modern map. the only available historical material for this project is the rapport plan from the year 1842, which was georeferenced through esri arcmap software, using buildings and the river confluence as control points; a small least-squares sketch of such a transformation is given below. to allow access for the wider public, it was necessary to publish the georeferenced map on the internet. as in the previous two methods, the google earth visualization environment was used. for this reason, it was necessary to create a kmz file containing the map tiles. in this way, sufficiently fast downloading and high-quality map viewing was maintained. for cutting the map into tiles, the freely available tool maptiler was used. very convenient is the possibility of changing the transparency of the historical map and thus allowing comparison with the current situation. the user can easily identify historical, defunct sites in the context of detailed contemporary topographic maps or the orthophoto. figure 3: visualization of the rapport plan overlaying the orthophoto. the user-controlled opacity changing helps to identify historical buildings in the present-day context. 3.4 3d map three-dimensional visualizations are well-established for the presentation of maps or landscape models. today, the need to present three-dimensional (3d) cartographic content on computer monitors is growing and the possibilities for these presentations are increasing (buchroithner et al. 2011). because the world is three-dimensional, the perception of 3d information is more natural and therefore, in some cases, 3d visualization is more effective than 2d.
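as announced above, here is a hedged numpy sketch of what the georeferencing of the 1842 rapport plan against control points computes: a six-parameter affine transformation from pixel to map coordinates estimated by least squares. the control-point values are invented for illustration and do not reproduce the project's actual coordinates.

    import numpy as np

    # invented control points: (column, row) in the scanned plan -> (easting, northing) in the target crs
    src = np.array([[120.0, 840.0], [1530.0, 910.0], [800.0, 230.0], [1460.0, 310.0]])
    dst = np.array([[3560.0, 7120.0], [4975.0, 7185.0], [4240.0, 6500.0], [4900.0, 6585.0]])

    # affine model: e = a*col + b*row + c, n = d*col + e*row + f, estimated by least squares
    A = np.hstack([src, np.ones((len(src), 1))])
    coef_e = np.linalg.lstsq(A, dst[:, 0], rcond=None)[0]
    coef_n = np.linalg.lstsq(A, dst[:, 1], rcond=None)[0]

    def pixel_to_map(col, row):
        v = np.array([col, row, 1.0])
        return float(coef_e @ v), float(coef_n @ v)

    print(pixel_to_map(1000.0, 500.0))  # map coordinates of an arbitrary plan pixel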
in the history it can be traced tendency of using some typical graphic techniques aimed to express the relief of the landscape as realistic as it is possible [18]. usually it was perspective visualization suggesting 3d effect using shading, cross hatching or hill symbols. 3d map contains both semantic and geometric description of the captured area [24] and its visualization in 3d environment gives the user a better idea of space, especially height proportions. the approach of spatial-temporal change visualization using 3d map mentioned in this article counts only with the 3d variant of historical maps, when the comparison with the reality is realized only through the possibility of changing transparency of 3d map over orthophoto. in the case of this project, 3d map is considered as a derivation of the above mentioned rapport plan. individual fortress objects were modelled in 3d textured using images of historical maps and located exactly at their places on the map. the plastic impression has been reached. bastions and other buildings pop up out from the map. users can imagine how the fortress appeared even the 3d map is not a precise representation of the reality.3d map of olomouc bastion fortress was created in the freely available google sketchup. it was created especially for architects, but it is also use as film makers, game developers and many other professions. with the use of push/pull tool of google sketchup and few other operations almost any shape can be created. these shapes can then be mapped by any texture. in this particular case, as a texture image rapport plan was used. this is more natural than using photorealistic textures such as grass, bricks, rocks, etc. ___________________________________________________________________________________________________________ geoinformatics ctu fce 271 figure 4: the procedure of 3d map creation on the basis of the rapport plan. the creation of the 3d map and modelling shapes of the fortress objects very time consuming activity; currently it is prepared only the southern and south-western part of the fortification. 3.5 photorealistic model the most perfect and also the most difficult digital representation of the reality is the photorealistic 3d model. while the digital reconstruction of the defunct buildings it is often very difficult to find relevant historical documentation of the exact dimension and shape of the object. the following text briefly describes the modelling techniques that are appropriate to the acquisition of actual photorealistic 3d models. in the case of modelling non-existing objects only empirical techniques can be applied with use of historical plans, photographs or archaeological research results. large amount of techniques for 3d the digitization of objects exists. the most popular methods are laser scanning and terrestrial photogrammetry. these methods are very expensive and cannot be used for modelling extinct objects. in this case it is necessary to used so-called empirical techniques. the model parameters are then based on historical documents, plans or archaeological research. the creation of detailed worked out textured 3d model of the whole historical city is the work for many people for a long time. for example, the work on above-mentioned rome reborn project, which also uses the google earth environment, lasted an international team of workers more than ten years [22]. for this reason, just samples of only a few buildings that survived to the present were processed. 
this is the so-called teresian gate, the water and the old town barracks, and the system of walls in bezruc park. the 3d models were, like the 3d map, constructed in the google sketchup environment. figure 5: photorealistic 3d model of old town barracks. 3.6 interface and application preview the google earth environment was used for visualization of the spatial-temporal changes. google earth allows inserting overlays such as a masthead, legend, notes, etc. navigation between layers is realized via a tree structure in the left pane. here the user can switch on and off the layers that contain the output from the individual visualization methods. for example, it is possible to simultaneously view and compare historical map images or text. using animation, several flybys over the historic fortifications were created, which are also available. figure 5: the environment of the interactive application in google earth. in interactive and animated map scenes, objects can be modelled and defined by changing graphic or positional attributes. objects can also easily be created by changing their size or shape. only with the advent of modern computer technology can map objects be set in motion easily [9]. this project benefited from the first mentioned option, a flight over the terrain combined with other time-controlled animated actions. a comprehensive guide was created combining all the described visualization methods, which allows a user to obtain information in a coherent sequence. the basis of the interactive guide is a planned tour route, i.e. visualization of the informational content in a pre-specified time sequence. the user controls the speed of the tour and the time within which the multimedia blocks of text or images are displayed. an additional multimedia element of the animated flyby can be music or sounds. an example of such an application is the interactive reconstruction of the roman villa at weilberg [7]. 4. evaluation of methods the evaluation of the five previously mentioned methods of spatial-temporal change visualization (text, images, maps, 3d map and photorealistic 3d model) was done through an expert analysis. the evaluation was divided into two levels. firstly, the technological demands of the processing and preparation of each method were investigated. in the second level the user perception was evaluated. in both levels all factors were assessed separately, weighted by their significance, and the outcome was determined as the weighted average. because the two evaluation categories needed to be compared, the absolute values of the weighted averages were converted to relative ones, where 100 % corresponds to the highest value (a small numerical sketch of this normalization is given below). in both categories the highest average was obtained for the photorealistic 3d model. figure 6: comparison of technological processing costs and user perception quality. the evaluation of the visualization methods according to the technological processing can be considered fully objective, since it is based on practical experience with creating visualization applications.
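a small numeric sketch of the scoring procedure described above, under invented factor scores and weights (the real values were assigned by the authors during the expert analysis): each method receives a weighted average, which is then rescaled so that 100 % corresponds to the highest value.

    # invented factor scores (1-5) and weights; they only illustrate the computation
    weights = [0.5, 0.3, 0.2]                     # e.g. time, data demands, required skills
    scores = {
        "text": [1, 1, 1],
        "image pairs": [2, 2, 1],
        "2d map overlay": [3, 3, 2],
        "3d map": [4, 3, 3],
        "photorealistic 3d model": [5, 5, 5],
    }

    weighted = {m: sum(w * s for w, s in zip(weights, v)) for m, v in scores.items()}
    top = max(weighted.values())
    relative = {m: round(100 * v / top, 1) for m, v in weighted.items()}   # 100 % = highest value
    for method, pct in relative.items():
        print(f"{method}: {pct} %")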
the second part, where the perception of the visualization was evaluated, is based on a qualified presumption. the ideal amount of information is expressed by the point where the difference between the information benefit and the cost of information acquisition is maximal; by this measure, the best method of visualizing spatial-temporal change would be the pair of images, for which the difference between the relative cost of processing and the effectiveness of perception is the highest among the described methods. however, from the user perspective it is much more efficient to use methods with a higher visualization potential, despite the increasing costs. from this point of view, the best method seems to be the 3d map. the two-dimensional map is less convenient because of the low increase in perception quality compared with the high growth in cost. the same situation can be observed in the case of the photorealistic 3d model. 5. future plans work on the visualization of the historic fortification is not yet completed and is still ongoing. as for simple visualization through text, it is necessary to find the optimal number of labels (informational bubbles) so that the map is not too cluttered. a similar situation arises in the case of image comparison, where the limiting factor is the number of available and suitable photographs or images from the past. in the case of overlaying historical maps, the limiting factor is the difficult situation with the copyright of the historical material. the vast majority of olomouc fortress data and information is owned by the viennese kriegsarchiv. our goal is to obtain the maximum number of plans and maps of the olomouc fortress from different historical periods. the absence of available data sources also influences the creation of 3d maps and 3d models; in this case, however, the limiting factor is rather time. 3d modelling is very time-consuming and technically demanding work. the fortress consists of a large number of more or less coherent structures. to create 3d maps and 3d models of the complete olomouc fortification, it is necessary to create a digital terrain model for the whole of olomouc, because of inaccuracies in the underlying digital terrain model contained in google earth. the evaluation of visualization techniques, described in the chapter on the evaluation of spatial-temporal change visualization methods, is currently quite subjective. this should be addressed in further research by both authors, which will assess the effectiveness of the various types of visualization using an eye-tracking device. currently, the created application for spatial-temporal visualization of the changes of the city of olomouc is not accessible to the public because of the unresolved copyright of the historical documents. future use of the application can be seen mainly in tourism and in the promotion of the city of olomouc. integration into the emerging museum of the olomouc fortress and into the museum of science under the auspices of palacký university is being considered. the ultimate goal is to create a complete model of the fortress and implement it in an immersive virtual reality system in order to be able to walk through the historic city. 6. conclusion the article describes five methods to express the spatial-temporal change in a historic town and also describes their advantages and disadvantages using the case study of the historic city of olomouc. 150 years ago the city was a strong fortress, but nowadays few of its citizens are able to get an idea of the extent of the fortress buildings.
all visualization methods use georeferenced data, so the results can be located in the context of geographical coordinates, which has a positive impact on the user's idea of the historical state of the city.
8. references
[1] boer, a.: processing old maps and drawings to create virtual historic landscapes, e-perimetron, 2010, vol. 5, no. 2: 49-57.
[2] buchroithner et al.: lenticular creation of thematic multi-layer-models, proceedings of geoviz workshop, 2011, hamburg, germany.
[3] carrozino, m. et al.: the immersive time-machine: a virtual exploration of the history of livorno, proceedings of the 3d-arch conference, 2009, trento, italy.
[4] cartwright, w.e.: using 3d models for visualizing "the city as it might be", proceedings of the isprs technical commission ii symposium, 2006, vienna, austria.
[5] fiala, j.: olomoucke promeny, olomouc, danal, 2006.
[6] fischer, r.: olomoucka pevnost a jeji zruseni, olomouc, 1935.
[7] google earth blog: animated 3d models in google earth, in digital form: http://www.gearthblog.com/blog/archives/2010/11/animated_3d_models_in_google_earth.html, 2010.
[8] guidi, g., frischer, b., lucenti, i.: rome reborn: virtualizing the ancient imperial rome, proceedings of the 3d-arch conference, 2007, trento, italy.
[9] haeberling, ch.: 3d map presentation – a systematic evaluation of important graphic aspects, proceedings of the ica mountain cartography workshop "mount hood", 2002, mt. hood, oregon, usa.
[10] haeberling, ch.: cartographic design principles for 3d maps – a contribution to cartographic theory, proceedings of the ica congress mapping approaches into a changing world, 2005, a coruna, spain.
[11] jung, k., kim, k.i., jain, a.k.: text information extraction in images and video: a survey, pattern recognition, 2004, volume 37, issue 5: 977-997.
[12] keates, j.s.: understanding maps, london, longman, 1982.
[13] kim, h.k.: efficient automatic text location method and content-based indexing and structuring of video database, j. visual commun. image representation, 1996, 74: 336-344.
[14] kubicek, p., kozel, j.: cartography techniques for adaptive emergency mapping, proceedings of the risk models and applications conference, 2010, berlin, germany.
[15] kupka, v., kuch-breburda, m.: pevnost olomouc, dvur kralove nad labem, fortprint, 2003.
[16] langweiluv model prahy, in digital form: http://www.langweil.cz/.
[17] niederoest, j.: a bird's eye view on switzerland in the 18th century: 3d recording and analysis of a historical relief model, proceedings of the cipa conference, 2005, antalya, turkey.
[18] petrovic, d., masera, p.: analysis of user's response on 3d cartographic presentations, proceedings of the 5th ica mountain cartography workshop, 2006, bohinj, slovenia.
[19] remondino, f. et al.: 3d virtual reconstruction and visualization of complex architectures – the "3d-arch" project, proceedings of the 3d-arch conference, 2009, trento, italy.
[20] swiss federal office of topography: atlas of switzerland, interactive and multimedia cd-rom, ed. swisstopo, wabern, switzerland, 2004.
[21] swiss world atlas interactive (2010), in digital form: http://www.schweizerweltatlas.ch/en.
[22] wells, s.: rome reborn in google earth, in digital form: http://www.romereborn.virginia.edu/rome_reborn_2_documents/papers/wells2_frischer_rome_reborn.pdf.
[23] zatloukal, r.: olomoucka pevnost ve svetle archeologickych nalezu, olomouc fortress, 2004.
[24] zebedin, l.
et al.: towards 3d map generation from digital aerial images, isprs journal of photogrammetry and remote sensing, 2006, volume 60, issue 6: 413-427.

the etruscans in 3d: from space to underground
fabio remondino1, alessandro rizzi1, belen jimenez1, giorgio agugiaro1, giorgio baratti1,3, raffaele de amicis2
1 3d optical metrology unit, bruno kessler foundation (fbk), trento, italy, e-mail: {remondino, rizziale, bjfernandez, agugiaro, baratti}@fbk.eu, web: http://3dom.fbk.eu
2 fondazione graphitech, trento, italy, e-mail: raffaele.de.amicis@graphitech.it, web: http://www.graphitech.it
3 dip. scienze dell'antichità – university of milano, italy, e-mail: giorgio.baratti@unimi.it
keywords: photogrammetry, laser scanning, geo-browser, panoramic images, 3d modeling, visualisation
abstract: geomatics and geoinformatics deal with spatial and geographic information, 3d surveying and modeling as well as information science infrastructures. geomatics and geoinformatics are thus involved in cartography, mapping, photogrammetry, remote sensing, laser scanning, geographic information systems (gis), global navigation satellite systems (gnss), geo-visualisation, geospatial data analysis and cultural heritage documentation. in particular, the cultural heritage field can largely benefit from different information and communication technologies (ict) tools to make digital heritage information more informative for documentation and conservation issues, archaeological analyses or virtual museums. this work presents the 3d surveying and modeling of different etruscan heritage sites with their underground frescoed tombs dating back to the vii-iv century b.c. the recorded and processed 3d data are used, besides digital conservation, preservation, transmission to future generations and study purposes, to create digital contents for virtual visits, museum exhibitions, better access and communication of the heritage information, etc.
figure 1: 3d surveying and modeling of a frescoed underground etruscan tomb (hunting and fishing, necropolis of tarquinia, italy) for archaeological documentation, virtual access, digital preservation and architectural analyses.
1. introduction art and history are essential and identifying elements of a country (or a civilization) – be it a present or a past one – since they play a crucial role in a community's culture and education: heritage must therefore be preserved and made available to future generations. at the same time, cultural heritage represents an important asset for tourism and it is reasonable to make it (digitally) available to a larger number of people, especially if it can be visited or accessed in a way that preserves it and causes no further damage. nowadays, disciplines dealing with geomatics and geoinformatics offer (i) great potentialities for the accurate and detailed 3d documentation and digital preservation of existing tangible heritage [1-3] and (ii) a large number of tools to make digital heritage more informative and easier to visit and enjoy even remotely.
indeed, beside digital documentation and conservation material (figure 1), virtual museums, virtual-environment caves and multimedia exhibitions – just to name some – are becoming more common every day, because they allow for interaction, on-demand information retrieval, integration of artefacts in their original context, virtual tours, immersive experiences for inaccessible sites, time-based 4d navigation, e-education, etc. in many cases, the number of on-line visitors of a "virtual museum" has already outnumbered the on-site physical visitors, clearly showing a slow but continuous shift in the perception of how museum contents are "consumed". digital contents are thus being progressively accepted and enjoyed besides the "classical" physical artifacts. with these points in mind, this paper presents and discusses experiences and results collected by the authors in the framework of the 3d surveying and modeling of some etruscan heritage sites in the centre of italy: the two necropolises in tarquinia and cerveteri (unesco world heritage site since 2004) and the necropolis in chiusi. they all feature hypogeous tombs dating back to the vii-iv century b.c. some tombs are single-roomed, others present a more articulated architectural structure, with different chambers mostly full of frescoed walls and ceilings. the work followed some fundamental steps in order to satisfy the project requirements (figure 2). 3d data were acquired during three distinct campaigns in 2009, 2010 and 2011, using different techniques that range from terrestrial laser scanning to high-resolution panoramic and multi-spectral imaging. all acquired data were subsequently used and integrated with other existing sources (e.g. satellite imagery, dtm, tabular data, etc.) to produce heterogeneous, multi-scale digital contents conceived for digital heritage preservation, archaeological and architectural analyses, virtual museums, multimedia exhibitions and web applications. reality-based 3d models, monoscopic and anaglyph visualizations, interactive virtual tours, a geo-browser platform and some documentary materials were created. they guarantee accurate digital documentation as well as access to otherwise hardly reachable underground tombs and locations, which are often closed to the public for preservation reasons. the produced 3d models were also used for analyses and investigations of the tomb geometries, to validate archaeological hypotheses or to formulate new ones. figure 2: the different steps in 3d surveying and modeling projects dealing with large sites or complex architectures: specifications, planning, acquisitions, processing and final representation. 2. surveying and 3d modelling 2.1 data acquisition 2.1.1 panoramic and multi-spectral imaging digital images acquired in the visible, ir and uv domains are used for 3d reconstructions (photogrammetry), virtual visits of the sites and diagnostic studies. the documentation with spherical or panoramic photography is becoming a very common practice for many kinds of visual applications (e.g. google street view, 1001 wonders, etc.).
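one common, low-cost way of acquiring such panoramas, discussed in the text that follows, is to stitch overlapping frames taken while rotating the camera approximately about its perspective centre. the snippet below is a hedged sketch using opencv's high-level stitcher with placeholder file names; it is an illustration of the principle, not the workflow actually used in the project.

    import cv2

    # placeholder file names: overlapping frames shot while rotating around the camera centre
    frames = [cv2.imread(p) for p in ("pan_01.jpg", "pan_02.jpg", "pan_03.jpg")]
    if any(f is None for f in frames):
        raise SystemExit("input frames not found")

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("stitching failed, status code:", status)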
in addition, the derivation of metric results from spherical images for interactive exploration, accurate documentation and realistic 3d modeling is receiving great attention due to high-resolution content, large field-of-view, low-cost, easiness, rapidity and completeness of the method [4]. panoramic images are generally acquired with dedicated rotating linear sensors or, more economically, stitching together multiple frame images acquired rotating the camera around the perspective centre. multi-spectral imaging (e.g. visible reflectance, ir reflectography and uv-induced fluorescence) provides very useful information about the employed painting materials, allowing analyses of frescoed areas. multi-spectral images, acquired with a cooled ccd camera coupled with interferential filters, need to be radiometrically calibrated, registered, processed and finally overlapped onto the 3d geometry in order to perform quantitative analyses and differentiate colors and pigments. the latter may be present, hidden or invisible to the naked eye. but materials showing the same color in a certain part of the light spectrum can have different chemical compositions and reflectance spectra and thus can be identified with multi-spectral imaging [5]. ___________________________________________________________________________________________________________ geoinformatics ctu fce 285 2.1.2 range data optical active sensors [6], generally time-of-flight (tof) and triangulation-based range instruments can easily provide dense and accurate 3d point clouds at very high geometric resolution. in case of large and complex sites, tof sensors are an excellent and powerful solution to collect 3d data (figure 3) and create accurate surface models. generally, the instrument has to be located in different positions and afterwards the recorded unstructured point clouds need to be co registered in a unique reference system. figure 3: typical range-based point clouds acquired with a tof laser scanner, visualized in color-code mode. 2.2 3d modelling the acquired range-based point clouds must be converted into structured polygonal models to digitally recreate the geometric shapes of the surveyed objects and allow better photo-realistic visualization. a geometric polygonal model offers much more possibilities than a simple rendering of the point cloud. hdr images, acquired with separate cameras, are mapped onto the reconstructed 3d geometry for photo-realistic visualization, restoration, conservation purposes and ortho-image generation. if multi-spectral images are also mapped onto the 3d geometry, they turn into quantitative information for metric and diagnostic purposes. 3. data visualisation and access 3.1 3d geo-browser the contextualization of an entire civilization or large heritage sites can be achieved with modern geographic browsers (e.g. google earth or nasa world wind) or web-based 3d tools [7-9]. existing 3d-gis databases can be of great value to build up virtual environments with a friendly and flexible user interface, customizing the databases with dedicated cultural information (figure 4). textual information, images, video and 3d models can be added and visualized on-demand together with the 3d geographic information. the user can easily fly over the landscape, activate different layers of data and read archaeological information. 
figure 4: examples of the customized geo-browser which allow the visualization of information related to the etruscan civilization, its extents in italy, archaeological areas, harbors and mines, necropolises, etc. 3.2 image-based rendering high-resolution panoramic images of a necropolis can be linked together to create virtual tours, which are enriched by means of architectural and archaeological information regarding the site, the civilization and the single monuments ___________________________________________________________________________________________________________ geoinformatics ctu fce 286 (figure 5). such application thus allows a user to virtually navigate through the site, obtain information on-demand (text, images, video, stereoscopic views, etc.) and access areas generally closed to the public for conservation reasons. figure 5: examples from the virtual tour of the banditaccia necropolis in cerveteri. multiple high-resolution panoramas are linked together and navigable allowing an immersive tour of the heritage site, in particular for those areas closed to the public for conservation reasons. additional information (text, images, video, etc.) is provided by means of linkable hotspots. 3.3 monoscopic and stereoscopic visualisation the produced 3d models (figure 6) are rendered using predefined flying paths in monoscopic and stereoscopic (anaglyph) videos. indeed the interactive visualization of large geometric (more than 10 million polygons) and textured 3d models is still problematic and an open issue for the research community. therefore offline presentations and renderings are currently the most exploitable solutions for non-experts and museum exhibitions. figure 6: some views of the created 3d models rendered in shaded or textured mode for monoscopic or stereoscopic visualization. left: tomb of the reliefs (iv cent. b.c., necropolis of cerveteri). centre: tomb of the painted lions (vii cent. b.c., necropolis of cerveteri). right: tomb of the augurs (vi cent. b.c., necropolis of tarquinia). 4. archaeological and architectural analyses heritage 3d models can be used not only for digital documentation, conservation, visualization and data exploration purposes, but also to perform deep analyses on the geometric structures and constructive methods. in the case of a wall, ___________________________________________________________________________________________________________ geoinformatics ctu fce 287 for example, it is possible to analyze its shape and inclination, evaluating the orthogonality with respect to other walls or its perpendicularity with respect to the floor. there are many useful software tools that allow to investigate the geometry of a 3d model, but they are of little help if the geometric quality of the model is inadequate. for this reason, it is very important to use reality-based 3d data representing the surveyed scene as accurately as possible. the 3d model‟s resolution is an important parameter to consider [10] because it is related to the level of detail (lod) the model is conceived for, therefore it is an intrinsic quality parameter regarding the goodness of measurements that can be taken on it: the coarser the resolution, the less reliable and exact the measurements will tend to be. (a) (b) figure 7: geometric 3d models of (a) the tomb of the monkey (16 million triangles) and (b) the bartoccini tomb (15 million triangles). figure 7 presents the tomb of the monkey (necropolis of chiusi, v cent. b.c.) 
and the bartoccini tomb (necropolis of tarquinia, vi cent. b.c.), chosen because of their multiple frescoed chambers with irregular geometries. for archaeological and architectural analyses, longitudinal and transversal sections are particularly interesting for such complex monuments. using a detailed 3d model, it is indeed possible to visualize both the internal and external parts of the tombs more conveniently and accurately, analyze every single structure and the internal geometries, showing the burial beds, the shapes and profiles of the roofs and allowing the derivation of structure measurements (figure 8). a) b) figure 8: perspective views of the split and exploded textured 3d models of the tomb of the monkey (a) and the bartoccini tomb (b). an orthogonal view of the tomb of the monkey (figure 9a) clearly shows that the tomb is not symmetric and the position of each room is unrelated if compared to the other rooms. an old archaeological survey [11] shows, on the contrary, a more regular geometry of the tomb, with more symmetric side rooms (figure 9b). thus, modern 3d recording and modeling methodologies, coupled with architectural analyses realized with dedicated procedures [12], ___________________________________________________________________________________________________________ geoinformatics ctu fce 288 allow a more precise and correct surveying and structure representation. similar geometric surveying and analyses lead to interesting results and new hypotheses in archaeology [13, 14]. (a) (b) figure 9: tomb of the monkey: comparison between the new produced map (a) and the old one (b). further architectural and geometrical studies were conducted on the inner walls and on their geometric flatness, by comparing each wall with its fitting plane. results, shown in figures 10(a)(c), show for each inner wall a different variation in geometry and no similar value can be found, probably due to structure deteriorations caused by the passing of time. other analyses considered the orthogonality of the inner walls of the rooms with respect to the ground floor. results in figures 10(b)(d) underline that each inner wall has practically the same inclination value and that the inclination is oriented toward the inside of the room. this is coherent with the hypotheses of the archaeologists: etruscan tombs were built in a similar way of etruscan houses. (a) (b) (c) (d) figure 10: geometric analyses on two side rooms of the tomb of the monkey. on the left column (a, c), difference maps between inner walls geometries and their fitting planes. on the right column (b, d), difference maps between inner walls geometries and a vertical plane. ___________________________________________________________________________________________________________ geoinformatics ctu fce 289 thanks to reality-based 3d models it is moreover possible to derive precise metric measures, not only in linear terms as distances from point to point, but also areas and volumes. thus, it is possible to catalogue each tomb with more accuracy and, for example, evaluate how many cubic meters the etruscans extracted during the realization of a tomb. moreover, precise size measurements of rooms, passages, entrances and of other features (e.g. funeral beds) might contribute to derive new hypotheses about the physical characteristics of the etruscan man or regarding the construction techniques. such values were derived for every room of the surveyed tombs. 
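volumes of this kind can be obtained directly from a closed, consistently oriented triangle mesh by summing the signed volumes of the tetrahedra spanned by each triangle and the origin (divergence theorem). the sketch below is a generic numpy illustration of the computation, not the tool used by the authors; the toy cube is invented data and the result is only meaningful for watertight meshes.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """volume enclosed by a closed, consistently oriented triangle mesh.
    vertices: (n, 3) float array; faces: (m, 3) int array of vertex indices.
    returns the volume in the cube of the model units (e.g. m^3)."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # signed volume of the tetrahedron (origin, v0, v1, v2) for every triangle
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())

if __name__ == "__main__":
    # toy example: a unit cube (12 outward-oriented triangles) -> volume 1.0
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
    faces = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
                      [0, 1, 5], [0, 5, 4], [1, 2, 6], [1, 6, 5],
                      [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
    print(mesh_volume(verts, faces))  # ~1.0
```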
comparisons of these data with analogous ones of other tombs built in different epochs and locations can be now performed, in order to search for similarities or new connections in the etruscan world. for example, the tomb of the monkey features very similar volumes of the lateral rooms (17,3 m3 left room, 17,5 m3 right room), while the frontal room is bigger (24 m3). this difference is not visible during a real-life exploration of the tomb (which is generally forbidden to the public). the main central room spans approximately 47 m3, while the global volume of the entire tomb yields 123 m3. since part of the main room‟s ceiling in the tomb of the monkey collapsed in the past, a virtual model of the reconstructed ceiling could be created starting from the existing surveyed structures. comparative views are given in figure 11 showing not only the geometries, but also the frescoes. the virtual reconstruction analysis allowed to measure the volume of the collapsed part (0,8 m3), too. further studies on the ceiling shapes can also be performed, as shown also in [13, 14]. (a) (b) (c) (d) figure 11: the collapsed ceiling in the tomb of the monkey, shown from outside (a) and inside (b) the surveyed and modeled tomb; the virtual geometric reconstruction of the ceiling seen from outside (b) and inside (d). 5. conclusions geomatics and geoinformatics are showing great potentialities for a better documentation, communication, understanding and preservation of cultural heritage. innovative applications in the heritage field are therefore available, also for the future of museums and exhibitions. digital contents and advanced tools for remote and interactive data access and visualization, immersive experience, virtual visits and e-education are the basis to increase the communication and study of heritage, to ease the retrieval of visual information and to digitally transmit culture to the ___________________________________________________________________________________________________________ geoinformatics ctu fce 290 future generations. as shown in the article, archaeological researchers can have great benefits from reality-based 3d data for their studies. in particular they can deeply investigate many archaeological masterpieces from different points of view and calculate with high accuracy their geometries. thanks to these new methodologies, it might be possible to validate previously existing theories or formulate new ones related e.g. to construction methods. what is expected for the near future is an emerging practice of representing and communicating heritage information through new ict tools, not only with standard documentation material, but also via web applications or in museums and exhibitions. 6. acknowledgments the authors are really thankful to the soprintendenza per i beni archeologici dell‟etruria meridionale (dr a.m. moretti, dr r. cosentino, dr m. cataldi), soprintendenza per i beni archeologici della toscana (dr m. salvini), historia (http://www.historiaweb.it) and art-test (http://www.art-test.com), partners and supporters for the reported work. 7. references [1] ikeuchi, k., miyazaki, d. (eds): digitally archiving cultural heritage, springer, 503 pages, 2007. [2] patias, p.: cultural heritage documentation. in: fryer j, mitchell h, chandler j (eds), application of 3d measurement from images, chapter 9, whittles, dunbeath, 2007. [3] remondino, f., rizzi, a.: reality-based 3d documentation of natural and cultural heritage sites – techniques, problems and examples. 
applied geomatics, vol.2(3): 85-100, 2010. [4] barazzetti, l., fangi, g., remondino, f., scaioni, m.: automation in multi-image spherical photogrammetry for 3d architectural reconstructions, proc. of 11th int. symposium on virtual reality, archaeology and cultural heritage (vast 2010), paris, france, 2010. [5] remondino, f. rizzi, a., barazzetti, l., scaioni, m., fassi, f., brumana, r., pelagotti, a.: geometric and radiometric analyses of paintings. the photogrammetric record, in press, 2011. [6] vosselman, g., maas. h-g. (eds): airborne and terrestrial laser scanning. crc, boca raton, 318 pp. isbn: 9781904445-87-6, 2010. [7] conti, g., simões, b., piffer, s., de amicis, r.: interactive processing service orchestration of environmental information within a 3d web client. proc. gsdi 11th world conference on spatial data infrastructure convergence, rotterdam, the netherlands, 2009. [8] manferdini, a., remondino, f.: reality-based 3d modeling, segmentation and web-based visualisation. proc. of euromed 2010, lncs 6436, springer verlag, pp. 110-124, 2010. [9] agugiaro, g., remondino, f., girardi, g., von schwerin, j., richards-rissetto, h., de amicis, r., 2011: a webbased interactive tool for multi-resolution 3d models of a maya archaeological site. int. archives of photogrammetry, remote sensing and spatial information sciences, vol. 38(5/w16), on cd-rom. 4th int. workshop "3d-arch 2011: "virtual reconstruction and visualization of complex architectures", 2-5 march 2011, trento, italy. [10] guidi, g., remondino, f., russo, m., menna, f., rizzi, a., ercoli, s.: a multi-resolution methodology for the 3d modeling of large and complex archaeological areas. international journal of architectural computing, vol. 7(1), pp. 39-55. [11] steingraeber, s. (ed): catalogo ragionato della pittura etrusca , jaca book, isbn 88-16-60046-2, 1985. [12] rizzi, a., baratti, g., jimenez, b., girardi, s., remondino, f.: 3d recording for 2d delivering the employment of 3d models for studies and analyses. int. archives of photogrammetry, remote sensing and spatial information sciences, vol. 38(5/w16), on cd-rom. 4th int. workshop "3d-arch 2011: "virtual reconstruction and visualization of complex architectures", 2-5 march 2011, trento, italy. [13] blersch. d, balzani m., tampone g.: the simulated timber structure of the volumnis’ hypogeum in perugia, italy, proc. of 5th int. conference on structural analysis of historical constructions: possibilities of numerical and experimental techniques, vol.1, pp. 327-334, new delhi, india, 2006. [14] blersch. d, balzani m., tampone g., the volumnis’ hypogeum in perugia, italy. application of 3d survey and modeling in archaeological sites for the analysis of deviances and deformations. proc. 2nd int. conference on remote sensing in archaeology “from space to place”, pp. 3řř-394. 
http://www.historiaweb.it/ http://www.art-test.com/ ___________________________________________________________________________________________________________ geoinformatics ctu fce 307 new sensors for cultural heritage metric survey: the tof cameras filiberto chiabrando1, dario piatti2, fulvio rinaudo2 1politecnico di torino, dinse viale mattioli 39, 10125 torino, italy, filiberto.chiabrando@polito.it 2 politecnico di torino, ditag corso duca degli abruzzi 24 , 10129 torino, italy, dario.piatti@polito.it, fulvio.rinaudo@polito.it keywords: time-of-flight, rim, calibration, metric survey, 3d modeling abstract: tof cameras are new instruments based on ccd/cmos sensors which measure distances instead of radiometry. the resulting point clouds show the same properties (both in terms of accuracy and resolution) of the point clouds acquired by means of traditional lidar devices. tof cameras are cheap instruments (less than 10.000 €) based on video real time distance measurements and can represent an interesting alternative to the more expensive lidar instruments. in addition, the limited weight and dimensions of tof cameras allow a reduction of some practical problems such as transportation and on-site management. most of the commercial tof cameras use the phase-shift method to measure distances. due to the use of only one wavelength, most of them have limited range of application (usually about 5 or 10 m). after a brief description of the main characteristics of these instruments, this paper explains and comments the results of the first experimental applications of tof cameras in cultural heritage 3d metric survey. the possibility to acquire more than 30 frames/s and future developments of these devices in terms of use of more than one wavelength to overcome the ambiguity problem allow to foresee new interesting applications. 1. introduction the 3d information of an object to be surveyed can be basically acquired in two ways: by using stereo image acquisitions or optical distance measurement techniques. the stereo image acquisition is already known and used for decades in the research community. the advantage of stereo image acquisition to other range measuring devices such as lidar, acoustic or radar sensors is that it achieves high resolution and simultaneous acquisition of the surveyed area without energy emission or moving parts. still, the major disadvantages are the correspondence problem, the processing time and the need of adequate illumination conditions and textured surfaces in the case of automatic matching procedures. optical distance measurement techniques are usually classified into three main categories: triangulation based technique, interferometry and time-of-flight (tof). the triangulation based technique normally determines an unknown point within a triangle by means of a known optical basis and the related side angles pointing to the unknown point. this often used principle is partitioned in a wealth of partly different 3d techniques, such as for instance active triangulation with structured illumination and passive triangulation [1]. interferometry measures depth also by means of the time-of-flight. in this case, however, the phase of the optical wave itself is used. this requires coherent mixing and correlation of the wave-front reflected from the object with a reference wave-front. 
the high accuracies of distance measurements performed with interferometry mainly depend on the coherence length of the light source: interferometry is not suitable for ranges greater than few centimeters since the method is based on the evaluation of very short optical wavelength. continuous wave and pulse tof techniques measure the time of flight of the envelope of a modulated optical signal. these techniques usually apply incoherent optical signals. typical examples of tof are the optical rangefinder of total stations or classical lidar instruments. in this latter case, actual laser scanners allow to acquire hundreds of thousands of points per second, thanks to fast scanning mechanisms. their measurement range can vary to a great extent for different instruments; in general it can vary between tens of meters up to some kilometers, with an accuracy ranging from less than one millimeter to some tens of centimeters respectively. nevertheless, the main drawbacks of lidar instruments are their high costs and dimensions. in the last few years a new generation of active sensors has been developed, which allows to acquire 3d point clouds without any scanning mechanism and from just one point of view at video frame rates. the working principle is the measurement of the tof of an emitted signal by the device towards the object to be observed, with the advantage of simultaneously measuring the distance information for each pixel of the camera sensor. many terms have been used in the literature to indicate these devices, which can be called: time-of-flight (tof) cameras, range imaging (rim) cameras, 3d range imagers, range cameras or a combination of the mentioned terms. in the following the term tof cameras will be prevalently employed, which is more related to the working principle of this recent technology. previous works, such as [2, 3, 4], have already shown ___________________________________________________________________________________________________________ geoinformatics ctu fce 308 the high potentiality of tof cameras for metric survey purposes. in [3] it has been demonstrated that a measurement accuracy of less than one centimeter can be reached with commercial tof cameras (e.g. sr-4000 by mesa imaging) after distance calibration. in that work, an accuracy evaluation of the sr-4000 camera measurements has been reported, with quantitative comparisons with lidar data acquired on architectural elements. in [2] an integrated approach based on multi-image matching and 3d point clouds acquired with tof cameras has been reported. thanks to the proposed approach, 3d object breaklines are automatically extracted, speeding-up the modeling phase/drawing production of the surveyed objects. in [4], an attempt to build up a 3d model of the laocoön-group copy at museum of art at ruhr university bochum using the pmdcamcube2.0 camera is reported. some reflective targets are emplo yed in order to register data acquired from three viewpoints; nevertheless, the systematic distance measurement errors decreased the final 3d point cloud quality. in this work, first a brief overview on commercial tof cameras is reported, in order to show pros and cons of the systems available on the market. then, a comparison between data acquired with two commercial tof cameras and two lidar devices is reported, in order to show the achievable 3d point clouds. moreover, an approach for metric survey and object modeling using tof cameras is reported. 
thanks to the adopted procedure, it is possible to obtain complete 3d point clouds of the surveyed objects, which can be employed for documentation and/or modeling purposes. finally, some conclusions and future works are reported. 2. tof image sensors there are two main approaches currently employed in tof camera technology: one measures distance by means of direct measurement of the runtime of a travelled light pulse, using for instance arrays of single-photon avalanche diodes (spads) [5,6] or an optical shutter technology [7]; the other method uses amplitude modulated light and obtains distance information by measuring the phase difference between a reference signal and the reflected signal [8]. such a technology is possible because of the miniaturization of the semiconductor technology and the evolvement of the ccd/cmos processes that can be implemented independently for each pixel. the result is the possibility to acquire distance measurements for each pixel at high speed and with accuracies up to about one centimeter in the case of phaseshift devices. while rim cameras based on the phase-shift measurement usually have a working range limited to 10-30 m, tof cameras based on the direct tof measurement can measure distances up to 1500 m. moreover, tof cameras are usually characterized by low resolution (no more than a few thousands of tens of pixels), small dimensions, costs that are one order of magnitude lower with respect to lidar instruments and lower power consumption with respect to classical laser scanners. in contrast to stereo based acquisition systems, the depth accuracy is practically independent of textural appearance, but limited to about one centimeter in the best case (actual phase-shift commercial tof cameras). in the following section, a brief overview on commercial tof cameras is reported. 2.1 commercial tof cameras the first prototypes of tof cameras for civil applications have been realized since 1999 [8]. after many improvements both in sensor resolution and accuracy performance that this technology has undergone in ten years, at the present many commercial tof cameras are available on the market. the main differences are related to working principle, sensor resolution and measurement accuracy. the phase shift measurement principle is used by several manufacturers of tof cameras, such as canesta inc., mesa imaging ag and pmdtechnologies gmbh, to mention just the most important ones. canesta inc. [9] provides several models of depth vision sensors differing for pixel resolution, measurement distance, frame rate and field of view. canesta inc. distributes sensors with field of view ranging between 30° and 114°, depending on the nature of the application. currently, the maximum resolution of canesta sensor is 320 pixel x 200 pixel (canesta “cobra” camera), one of the highest worldwide. some cameras from canesta inc. can also operate under strong sunlight conditions using canesta‟s sunshieldtm technology: the pixel has the ability to substantially cancel the effect of ambient light at the expense of producing a slightly higher noise. in figure 1 some images of tof cameras produced by canesta are reported. figure 1: some models of tof cameras by canesta inc.: canesta xz422, canesta dp200 and canesta cobra (from left to right). ___________________________________________________________________________________________________________ geoinformatics ctu fce 309 mesa imaging ag [10] has developed the swissranger (sr) tof camera series: sr-2, sr-3000 and sr-4000. 
working ranges up to 10 m are possible with tof cameras by mesa, while the sensor resolution is limited to 176 pixel x 144 pixels (sr-3000 and sr-4000 cameras). mesa imaging ag distributes sensors with field of view ranging between 43° and 6ř°, depending on the selected optics. simultaneous multi-camera measurements with up to three devices are possible by selecting three different modulation frequencies. in figure 2 some images of tof cameras produced by mesa are showed. figure 2: some models of tof cameras by mesa imaging ag: sr-2, sr-3000 and sr-4000 (from left to right). several models of tof cameras have been produced by pmdtechnologies gmbh [11] in the last years. the illumination unit is generally formed by one or two arrays of leds, one for each side of the camera (figure 3). pmdtechnologies gmbh provides several models of tof camera with different features and suitable for measurements also in daylight since all cameras are equipped with the suppression of background illumination (sbi) technology. currently, the pmd devices provide sensor resolutions up to 200 x 200 pixel (pmdcamcube3.0 camera) and a nonambiguity distance up to 40 m (pmda2 camera). the field of view of the latest model (pmdcamcube3.0) is 40° x 40°, but customization for specific applications is possible. also in this case simultaneous multi-camera measurements are possible, thanks to the possibility to select many different modulation frequencies. specific models for industrial applications are also delivered, such as the pmdo3 and pmds3 cameras (figure 3). figure 3: some tof cameras by pmdtechnologies gmbh: pmda2, pmdo3, pmds3 and pmdcamcube3.0 cameras (from left to right). the manufacturers providing commercial tof cameras based on the direct time of flight measurement are only a few. advanced scientific concepts inc. produces several models of tof cameras based on the direct time of flight measurement [12] (figure 4). data acquisition is performed with frame rates up to 44 fps, with optics field of view up to 45°. the proposed technology can be used in full sunlight or the darkness of night. nevertheless, the sensor resolution is limited to 128 pixel x 128 pixel. major applications of these cameras are airborne reconnaissance and mapping, space vehicle navigation, automotive, surveillance and military purposes. the great advantage of this technology is the high working range: measurements up to 1500 m are possible. 3dv systems is the only manufacturer which realizes tof cameras with shutter technology. as for the devices based on the phase shift measurement, the camera parameters, i.e. working distance and opening angle, are strongly related to the illumination unit (i.e. its optical power and its ___________________________________________________________________________________________________________ geoinformatics ctu fce 310 illumination characteristics). one tof camera called zcamii [7] based on the optical shutter approach has been realized by 3dv systems, which provides ntsc/pal resolution, with a working range up to 10 m and field of view up to 40°. another camera by 3dv systems which is mainly employed for real time gaming is the zcam camera: the working range is up to 2.5 m, with centimetric resolution, high frame rate (up to 60 fps) and rgb data with a resolution of 1.3 mpixel thanks to an auxiliary sensor. in fact, a key feature of tof cameras by 3dv systems is that rgb information is also delivered in addition to depth data. 
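the working ranges and non-ambiguity distances quoted above follow directly from the phase-shift measurement principle: the camera measures the phase delay of an amplitude-modulated signal, so the distance is proportional to the phase and the unambiguous interval is c/(2·f_mod). a minimal sketch of the relation follows; the modulation frequencies are illustrative values, not manufacturer specifications.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def non_ambiguity_range(f_mod_hz):
    """largest distance measurable without phase wrapping: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

def distance_from_phase(phase_rad, f_mod_hz):
    """distance corresponding to a measured phase delay in [0, 2*pi)."""
    return (phase_rad / (2.0 * math.pi)) * non_ambiguity_range(f_mod_hz)

if __name__ == "__main__":
    # illustrative modulation frequencies only
    for f_mod in (30e6, 15e6, 3.75e6):
        print(f"{f_mod / 1e6:6.2f} MHz -> non-ambiguity range {non_ambiguity_range(f_mod):6.2f} m")
    # a phase delay of pi rad at 30 MHz corresponds to half the non-ambiguity range (about 2.5 m)
    print(f"{distance_from_phase(math.pi, 30e6):.2f} m")
```

this is why lowering the modulation frequency (or combining several frequencies) extends the range of phase-shift devices, at the cost of coarser distance resolution.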
for a complete overview on commercial tof cameras (working principle, measurement parameters, distance calibration, etc.) refer to [13]. figure 4: some tof cameras by advanced scientific concepts inc.: dragoneye 3d flash lidar, tigereye 3d flash lidar and portable 3d flash lidar (from left to right). 2.2 distance measurement errors as in all distance measurement devices, tof cameras are typically characterized by both random and systematic distance measurement errors. in some cases, the influence of systematic errors has been strongly reduced by the manufactures, while other camera models still suffer from these error sources, thus limiting their actual applicability without suitable distance calibrations. according to [8], typical sources of noise in solid state sensors can be subdivided in three different classes: photocharge conversion noise, quantization noise and electronic shot noise, also called quantum noise. electronic shot noise is the most dominating noise source and cannot be suppressed. typical nonsystematic errors in tof distance measurements are caused by pixel saturation, “internal scattering”, “multipath effect”, “mixed pixels” and “motion artifacts”. some models of tof cameras suffer from the so called “internal scattering” artifacts: their depth measurements are degraded by multiple internal reflections of the received signal occurring between the camera lens and the image sensor. a common problem to all tof cameras based on phase shift measurement is the “multipath effect” (or “external superimposition”), especially in the case of concave surfaces: small parts of diffusely reflected light from different surfaces of the object may superimpose the directly reflected signals on their way back to the camera. a common problem in data acquired with tof cameras is represented by the so called “mixed pixels” or “flying pixels” or “jumping edges”: they are errant 3d data resulting from the way tof cameras process multiple returns of the emitted signal. these multiple returns occur when a light beam hits the edge of an object and the beam is split: part of the beam is reflected by the object, while the other part continues and may be reflected by another object beyond. the measured reflected signal therefore contains multiple range returns and usually the reported range measurement for that particular ray vector is an average of those multiple returns. finally, when dealing with real time applications or moving objects, the so called “motion artifacts” could affect the acquired data. the result is that tof data are often noisy and characterized by several systematic and random errors, which have to be reduced in order to allow the use of rim cameras for metric survey purposes. 3. 3d object metric survey previous works have already demonstrated the high potentialities of tof cameras for metric survey purposes. in [3] an accuracy evaluation of data delivered by the sr-4000 camera has been reported. the tof data has been compared with more accurate lidar data acquired on an architectural frieze: the results showed that the proposed distance calibration procedure allows reducing the distance measurement error to less than one centimeter. in the following, a qualitative comparison between data acquired with two tof cameras and two laser scanners on the same object is reported, in order to show the performance of rim cameras for metric survey purposes. 
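before that comparison, the following sketch illustrates two of the counter-measures just described: averaging repeated frames acquired from a fixed position, and rejecting "mixed pixels" whose depth jumps away from the local neighbourhood. it is a generic, simplified illustration written for this note, not the calibrated mpr filter of [13]; the threshold and the toy depth map are invented.

```python
import numpy as np

def average_frames(frames):
    """pixel-by-pixel mean of repeated acquisitions from a fixed position (noise reduction)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def remove_jumping_edges(depth, max_jump=0.10):
    """flag pixels whose depth differs from the median of their 3x3 neighbourhood by more
    than max_jump (metres); returns a copy with those pixels set to nan."""
    d = depth.astype(float)
    h, w = d.shape
    padded = np.pad(d, 1, mode="edge")
    # stack the nine shifted views that form every pixel's 3x3 neighbourhood
    stack = np.stack([padded[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)
    local_median = np.median(stack, axis=0)
    cleaned = d.copy()
    cleaned[np.abs(d - local_median) > max_jump] = np.nan
    return cleaned

if __name__ == "__main__":
    # toy depth map: a flat wall at 3.00 m with one spurious return straddling an edge
    depth = np.full((5, 5), 3.00)
    depth[2, 2] = 3.80  # "flying pixel" between foreground and background
    print(remove_jumping_edges(average_frames([depth, depth]), max_jump=0.10))
```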
then, some results obtained with the “multi-frame registration algorithm” proposed by [13] are reported as an example of automatic 3d object reconstruction from multiple viewpoints. ___________________________________________________________________________________________________________ geoinformatics ctu fce 311 3.1 tof versus lidar in order to qualitatively compare data acquired with tof cameras and data acquired with lidar instruments, the architectural frieze of figure 5 has been surveyed with different instruments. first, the object has been surveyed using two tof cameras, the sr-4000 and the pmdcamcube3.0; then, the same object has been surveyed by using two wellknown instruments: the riegl lms-z420 lidar instrument and the s10 mensi triangulation based scanner. in both cases, the rim cameras were positioned on a photographic tripod, in front of the object, and 30 frames were acquired after a warm-up of forty minutes in order to have a good measurement stability [13]. after that, tof data were averaged pixel by pixel in order to reduce the measurement noise. in the case of the sr-4000, distance data was corrected with the distance calibration model [13], while no distance calibration is available for the pmdcamcube 3.0 yet. then, the mixed pixel removal (mpr) filter [13] was applied, in order to automatically remove the “mixed pixels”, which are errant 3d data resulting from the way tof cameras process multiple returns of the emitted signal. data acquired with the riegl lms-z420 laser scanner was filtered with the riscan pro software, while the mensi data was manually filtered with the geomagic studio 10 software. the results of the point clouds acquired on the frieze are reported in figure 5. as one can observe, good results have been obtained with the two tof cameras, even if the point density is lower than in the case of lidar data. in [13] it has been demonstrated that the sr-4000 measurement accuracy on the frieze is few millimeters after distance calibration, therefore comparable to the lidar accuracy. no accuracy evaluation of the pmdcamcube 3.0 measurements has been performed yet, so why only a qualitative comparison is reported in this work. 3.2 automatic 3d object reconstruction with tof data in order to obtain a complete 3d model of the surveyed objects, more than one acquisition viewpoint is usually required with tof cameras. in fact, their field of view is often limited to about 40°, the working range is often smaller than a tens of meters and foreground objects in the scene can occlude background objects (as in the case of lidar acquisitions). since data are acquired from different viewpoints and each point cloud is referred to a local coordinate system fixed to the device, suitable registration procedures have to be adopted in order to register the acquired data. the approach proposed in this work to acquire data is related to the following scene acquisition conditions [13]: the tof camera acquisitions are performed from a stable position (i.e. photographic tripod) in order to acquired several frames (e.g. 10†30) of a static scene; in this way, it is possible to average the acquired frames in order to reduce the measurement noise. moreover, several camera positions are adopted in order to survey the entire object, remembering to maintain an overlap of at least 50% between consecutive camera viewpoints. 
the choice of acquiring data from a few static camera positions is justified by two main reasons: (i) measurement noise reduction thanks to multi-frame acquisition, since the frames acquired from the same position are averaged pixel by pixel; (ii) limitation of the accumulated registration error, because this error inevitably grows with the number of consecutive 3d point clouds to be registered. the integration time is adjusted for each camera position, in order to avoid saturated pixels while maintaining high amplitude values and, therefore, precise distance measurements. in [13] an algorithm for the automatic registration of tof point clouds has been proposed. the algorithm, called "multi-frame registration algorithm", automatically performs tof point cloud registration using data coming only from the tof device. it exploits both the amplitude data and the 3d information acquired by tof cameras. homologous points between two acquisition positions are extracted from amplitude images obtained after averaging multiple frames (hence the name "multi-frame registration"). after a robust estimation of the spatial similarity transformation parameters with the least median of squares estimator [14], the spatial similarity transformation between two adjacent point clouds is estimated in a least-squares sense and the registration is performed, with estimation of the residuals. the procedure is extended to all acquisition positions. in figure 6, some results of the registration process are reported for data acquired from two positions with the sr-4000 camera on a small area of the topography laboratory façade of the politecnico di torino (italy), in order to show the potential of the method. as one can observe from figure 6, the point density increases in the overlap region between the two point clouds. some multiple reflections, mixed pixels and other outliers occurred at the window glass: the mpr filter [13] removed almost all of them, which is why only a few points are still visible in the internal area of the windows. the differences between the z values (depth direction) of the two point clouds in the overlap region have a mean value of about 0.01 m, which is the measurement accuracy of the sr-4000 camera. therefore, the final 3d point cloud after the registration process has an accuracy of about 0.01 m, which can be suitable for modeling purposes and/or integration with other survey techniques [2].

4. conclusions and future works

in this paper, an overview of commercial tof cameras and of their typical systematic and random measurement errors has been reported, in order to show the main characteristics of the available sensors. then, a qualitative comparison between data acquired with two commercial tof cameras and two laser scanners on an architectural object has been reported. the results show the high potential of rim cameras for metric survey purposes in the cultural heritage field. tof cameras are cheap instruments (less than 10.000 €) based on video real time distance measurements and can represent an interesting alternative to the more expensive lidar instruments for close range applications. in addition, the limited weight and dimensions of tof cameras allow a reduction of some practical problems such as transportation and on-site management, which are typical of lidar instruments.
figure 5: data acquisition and 3d views of: the sr-4000 point cloud after frame averaging, distance correction and mixed pixel removal (first row), the pmdcamcube3.0 point cloud after frame averaging and mixed pixel removal (second row), the mensi point cloud after manual filtering (third row) and the riegl lms-z420 point cloud after filtering with the riscan pro software (forth row). ___________________________________________________________________________________________________________ geoinformatics ctu fce 313 nevertheless, the sensor resolution is still limited and the main problem of phase-shift rim cameras is the limited working range. future developments will probably overcome this problem by using more than one modulation frequency. finally, some results about automatic point cloud registration for 3d object reconstruction using tof cameras have been reported in this work. using suitable registration procedures, it is possible to automatically obtain complete 3d point clouds of the surveyed objects, with accuracy close to the measurement accuracy of the considered device. future works will deal with quantitative comparisons between calibrated tof data and lidar data after performing the automatic registration of rim data acquired from different viewpoints. figure 6: homologous points extracted with surf [15], which is implemented in the multi-frame registration algorithm, from amplitude images acquired from different positions (left); 3d view of the final point cloud after frame averaging, distance correction, mixed pixel removal and automatic registration with data acquired with the sr-4000 camera (right). 5. references [1] jähne, b., haubecker, h., geibler, p.: handbook of computer vision and applications. academic press, 1999. [2] rinaudo, f., chiabrando, f., nex, f., piatti, d.: new instruments and technologies for cultural heritage survey: full integration between point clouds and digital photogrammetry, proceedings of the third international conference, euromed 2010, lemessos (cy) november 8-13, 2010, 5670. [3] chiabrando, f., piatti, d., rinaudo, f.: sr-4000 tof camera: further experimental tests and first applications to metric survey, isprs proceedings, vol. xxxviii/5, commission v symposium, newcastle upon tyne (uk), 2010, 149-154. [4] scherer, m.: the 3d-tof-camera as an innovative and low-cost tool for recording, surveying and visua lization – a short draft and some first experiences, proceedings of cipa symposium, kyoto, japan, 2009. [5] albota, m.a., heinrichs, r.m., kocher, d.g., fouche, d.g., player, b.e., obrien, m.e., aull, g.f., zayhowski, j.j., mooney, j., willard, b.c., carlson, r.r: three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser , appl. opt. (2002) 41, 7671-7678. [6] rochas, a., gösch, m., serov, a., besse, p.a., popovic, r.s: first fully integrated 2-d array of single-photon detectors in standard cmos technology, ieee photonic. technol. lett. 2003, 15, 963-965. [7] gvili, r., kaplan, a., ofek e., yahav, g, depth keying, proceedings of spie electronic imaging, vol. 5249, 2003, 534-545. [8] lange, r., 3d time-of-flight distance measurement with custom solid state image sensors in cmos/ccdtechnology. ph.d. thesis, university of siegen (germany), 2000. [9] http://canesta.com/ (accessed on may 2011). [10] http://www.mesa-imaging.ch/ (accessed on may 2011). [11] http://www.pmdtec.com/ (accessed on may 2011). [12] http://www.advancedscientificconcepts.com/ (accessed on may 2011). 
[13] piatti, d.: time-of-flight cameras: test, calibration and multi-frame registration for automatic 3d object reconstruction. phd thesis, politecnico di torino (italy), 2011. [14] rousseeuw, p.j., leroy, a.m.: robust regression and outlier detection, wiley series in probability and mathematical statistics, wiley, new york, usa, 1987. [15] bay, h., ess, a., tuytelaars, t., gool, l.v., surf: speeded up robust features, computer vision and image understanding (cviu), vol. 110 (3), 2008, 346-359. http://canesta.com/ http://www.mesa-imaging.ch/ http://www.pmdtec.com/ http://www.advancedscientificconcepts.com/ ________________________________________________________________________________ geoinformatics ctu fce 2011 81 from deposit to point cloud – a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations m. doneus2,1, g. verhoeven1,3, m. fera2,5, ch. briese1,4, m. kucera1,5 and w. neubauer1,5 1lbi for archaeological prospection and virtual archaeology, vienna, austria michael.doneus@archpro.lbg.ac.at 2department for prehistoric and medieval archaeology, university of vienna, austria, 3department of archaeology, ghent university, belgium, 4institute of photogrammetry and remote sensing, vienna university of technology, austria, 5vias – vienna institute for archaeological science, university of vienna, austria keywords: excavation, computer vision, low-cost, 3d single-surface recording, orthophoto abstract: stratigraphic archaeological excavations demand high-resolution documentation techniques for 3d recording. today, this is typically accomplished using total stations or terrestrial laser scanners. this paper demonstrates the potential of another technique that is low-cost and easy to execute. it takes advantage of software using structure from motion (sfm) algorithms, which are known for their ability to reconstruct camera pose and threedimensional scene geometry (rendered as a sparse point cloud) from a series of overlapping photographs captured by a camera moving around the scene. when complemented by stereo matching algorithms, detailed 3d surface models can be built from such relatively oriented photo collections in a fully automated way. the absolute orientation of the model can be derived by the manual measurement of control points. the approach is extremely flexible and appropriate to deal with a wide variety of imagery, because this computer vision approach can also work with imagery resulting from a randomly moving camera (i.e. uncontrolled conditions) and calibrated optics are not a prerequisite. for a few years, these algorithms are embedded in several free and low-cost software packages. this paper will outline how such a program can be applied to map archaeological excavations in a very fast and uncomplicated way, using imagery shot with a standard compact digital camera (even if the ima ges were not taken for this purpose). archived data from previous excavations of vias-university of vienna has been chosen and the derived digital surface models and orthophotos have been examined for their usefulness for archaeological applications. the a bsolute georeferencing of the resulting surface models was performed with the manual identification of fourteen control points. in order to express the positional accuracy of the generated 3d surface models, the nssda guidelines were applied. 
simultaneously acquired terrestrial laser scanning data – which had been processed in our standard workflow – was used to independently check the results. the vertical accuracy of the surface models generated by sfm was found to be within 0.04 m at the 95 % confidence interval, whereas several visual assessments proved a very high horizontal positional accuracy as well. 1. introduction the process of archaeological excavation aims at a complete description of a site‟s unique stratification. in practice, each single deposit has to be uncovered, identified, documented and interpreted. since this can only be done within a destructive process, high resolution documentation techniques for three-dimensional (3d) single-surface recording (as defined by [1,2]) are essential. among the wide range of possible documentation techniques, total stations are typically used to document the outline and topography of top and bottom surfaces of single deposits. while total stations have become standard tools for documenting archaeological excavations in many countries, a detailed 3d single-surface recording is time consuming, cost-intensive, and provides only a general trend of the topography when dealing with rough surfaces. alternatively, terrestrial laser scanning (tls) has been proposed as a particularly sophisticated method to produce an accurate and detailed surface model [1,2,3]. due to their high acquisition costs, for the time being they are rarely applied at archaeological excavations. another option for fast 3d single-surface recording would be the ________________________________________________________________________________ geoinformatics ctu fce 2011 82 adoption of a photogrammetrical workflow. until recently, however, this alternative was not taken into consideration by many archaeologists, because photogrammetry again was considered to require high expertise, and expensive equipment (hardand software). for a few years, the research field of computer vision, having close ties to photogrammetry, is developing innovative algorithms and techniques to obtain 3d information from photographs in a simple and flexible way without many prerequisites. these are embedded in several free and low-cost computer vision software packages, which allow an extremely flexible and appropriate approach to model surfaces from a wide variety of imagery. the paper will outline how such a program can be applied to map archaeological excavations in a very fast and uncomplicated way, using imagery shot with a standard compact digital camera. in that way, the photographic record of the individual surfaces can be used to create digital surface models and orthophotos. in order to assess the accuracy of the method, the 3d surface models are compared to surface models generated by simultaneously acquired tls data. 2. structure from motion and multi-view stereo a lot of tools and methods exist to obtain information about the geometry of 3d objects and scenes from 2d images. one of the possibilities is to use multiple image views from the same scene. using photogrammetric techniques, an image point occurring in at least two views can be reconstructed in 3d. however, this can only be performed if the projection geometry is known, the latter expressing the camera pose (i.e. the external orientation parameters) and internal calibration parameters. 
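as a concrete illustration of that statement, the linear (dlt) triangulation below recovers a 3d point from its observations in two images once the 3x4 projection matrices are known. it is a textbook sketch under idealised assumptions (known, noise-free projection geometry), not the routine used by any particular software package, and the camera parameters in the example are invented.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """linear (dlt) triangulation of one 3d point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image observations.
    returns the 3d point expressed in the frame of the projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # toy stereo pair: identical intrinsics, second camera shifted 1 m along x
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 5.0])

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    print(triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~[0.2 -0.1 5.0]
```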
a structure from motion (sfm [4]) approach allows to simultaneously compute both the relative projection geometry and a set of 3d points from a series of overlapping images captured by a camera moving around the scene [5,6]. by detecting a set of image features for every photograph and subsequently monitoring the position of those points throughout the multiple images, the locations of those feature points can be estimated and rendered as a sparse 3d point cloud that represents the geometry/structure of the scene in a local coordinate frame [6,7]. sfm algorithms are used in a wide variety of applications but were developed in the field of computer vision, often defined as the science that develops mathematical techniques to recover the three-dimensional shape and appearance of objects in imagery [6]. recently, sfm received a great deal of attention due to two sfm implementations that are freely available: bundler [ř] and microsoft‟s photosynth [ř]. in this study, the commercial package photoscan (from agisoft llc) is applied. besides the aforementioned sfm approach, photoscan comes with a variety of dense multi-view stereo-matching algorithms (see [10] for an overview). as these reconstruction solutions operate on the pixel values [11,12], this additional step generates detailed meshed models from the initially calculated sparse point clouds, hence enabling proper handling of fine details present in the scenes. in a final step, the mesh can be textured. at this stage, the reconstructed 3d scene – which is still expressed in a local coordinate system – is by at least three manually measured ground control points (gcps) rotated and scaled in order to fit into the absolute coordinate frame. this means that the current approach just relies on one digital still camera, a computer, and a total station. 3. archaeological case study in the past, similar approaches have been applied in digitizing archaeological sites (e.g. [13,14]). however, the sfm and multi-view stereo algorithms have been improved over time (see [12]). a rigorous comparison with simultaneously acquired tls data was also not performed in this earlier work. to test the validity of the presented computer vision approach, a case-study was selected from an excavation in schwarzenbach [15], a multi-period hill fort in the federal state of lower austria, some 60 kilometres south of vienna. archaeological research has been going on since 1989 by vias-university of vienna, including various multidisciplinary projects focusing on archaeological prospection, environmental archaeology, and experimental reconstruction of settlement structures. the site has also functioned as a key-excavation-area for the development of exhaustive digital documentation techniques for stratigraphic excavations [1,2] conducting gis-based single surface documentation using a total station, digital photography, and tls (to capture a detailed documentation of top and bottom surfaces and feature interfaces). 3.1 scene reconstruction subject of this validity test is the top surface of the stratigraphic unit deposit se608s. it has been documented in trench 6 during the 2008 excavation campaign and is part of a burnt bronze age rampart structure. this surface is particularly adequate because its topographic altitude variation is about 0.5 m and the presence of many, variously shaped, sized and oriented stones made the surface reconstruction challenging. besides, the top surface of the deposit with its surroundings was scanned by a riegl lms-z420i laser scanner. 
the scanner was placed about 7 m above the documented surface, yielding a scanning distance below 10 m. two scanning positions were necessary to document the surface satisfactorily.

figure 1: one of the ten images (a) out of which photoscan calculated the camera poses (b), a sparse 3d point cloud (b) and a surface model (c). the latter can be georeferenced using gcps (d) and textured (e).

the imagery used in this reconstruction was shot in the summer of 2008 using a sony cybershot dsc-r1: a 10 mp digital bridge camera featuring a carl-zeiss vario-sonnar 2.8-4.8/14.3-71.5 mm t* zoom. of those images, all the exif (exchangeable image file)-defined metadata tags were available. to enable orthophoto production, the images were shot as vertically as possible: the photographer was standing on a stepladder, handholding a 2 m long pole on top of which the camera was mounted, reaching a varying camera altitude of 5 to 6 m above the surface. for this study, a small collection of ten images was used (see figure 1a). it needs to be noted that none of those images was specifically acquired for the following approach, but the selected set of images nicely covers the area of interest.

after importing all images into photoscan, feature points are automatically detected and described in all the source images. the approach is similar to the well-known sift (scale invariant feature transform) algorithm developed by david lowe [16], since the features are very stable under viewpoint and lighting variations. using these features, the sfm algorithm can relatively orient all the images and estimate the intrinsic camera parameters. the locations of the feature points result in a sparse 3d point cloud that roughly describes the scene in a local coordinate system (figure 1b). in a second step, a dense surface reconstruction is computed. because all pixels are utilized, this reconstruction step (which is based on a pair-wise depth map computation) enables proper handling of fine details present in the scenes and represents them as a 3d mesh (figure 1c). several algorithms are available to do this [10]. three of them – which differ by the way the individual depth maps are merged into the final 3d model – are chosen to compute a total of fifteen digital surface models (see table 1). in a third stage, every dsm is georeferenced by importing the coordinates of fourteen gcps and indicating their position on the photographs (figure 1d). afterwards, a seven-parameter similarity transformation converts the surface model into an absolute coordinate system. the maximum horizontal error reported between the computed coordinates and the gcp values acquired by total station was 7 mm. to enable an identical absolute georeferencing for every dsm, dsms 2 to 15 were computed using the images and gcps embedded in the project file from dsm 1. by varying the reconstruction parameters, photoscan computed a new dsm – which was separately stored – while maintaining the gcp positions relative to each individual photograph. although it is not necessary for the orthophoto or dsm output, the 3d models can be textured to get a more pleasing representation (figure 1e). finally, every dsm was exported as an ascii file.

dsm   method         quality      time (s)   max. - diff. (m)   max. + diff. (m)   μ diff. (m)   μ |diff.| (m)   σ (m)    rmse (m)
 1    height field   lowest              4       -0.247              1.086            -0.009          0.023        0.052      0.053
 2    height field   low                 7       -0.283              0.252            -0.012          0.017        0.021      0.024
 3    height field   medium             34       -0.553              0.246            -0.012          0.015        0.021      0.025
 4    height field   high              236       -0.292              0.200            -0.012          0.015        0.021      0.024
 5    height field   ultra high       1775       -0.290              0.326            -0.008          0.016        0.028      0.029
 6    smooth         lowest             23       -0.198              0.242            -0.010          0.018        0.023      0.025
 7    smooth         low                89       -0.263              0.252            -0.011          0.015        0.020      0.023
 8    smooth         medium            398       -0.288              0.189            -0.012          0.015        0.020      0.023
 9    smooth         high             1857       -0.293              0.183            -0.012          0.015        0.020      0.023
10    smooth         ultra high       9407       -0.294              0.186            -0.011          0.015        0.020      0.023
11    exact          lowest              5       -0.214              0.399            -0.010          0.019        0.028      0.029
12    exact          low                13       -0.293              0.279            -0.011          0.015        0.020      0.023
13    exact          medium             74       -0.289              0.220            -0.011          0.014        0.019      0.022
14    exact          high              545       -0.294              0.202            -0.011          0.014        0.019      0.022
15    exact          ultra high       4524       -0.291              0.176            -0.011          0.014        0.018      0.021

table 1: most important processing parameters and all computed metrics for the fifteen dsms; the difference statistics refer to the comparison with the 2 cm tls grid. all computations were performed using an intel® core™ i7-980x processor, an nvidia® geforce® gtx 580 and photoscan professional 0.8.1 beta running on a microsoft® windows™ 7 ultimate 64-bit machine.

3.2 spatial accuracy and precision assessment

notwithstanding that the 3d models are very easy to generate, it is prudent to evaluate their accuracy. therefore, all fifteen dsms were compared to tls data, the latter being acquired by riegl's lms-z420i. the two scanning positions were absolutely georeferenced with riegl reflectors (cylinders). the position of the reflectors was measured with a total station and yielded an average absolute georeferencing rmse of the tls data of 0.011 m. finally, riscan pro 1.6.1 was applied to resample, clean and filter (octree) the tls data to reduce the noise and smooth the point cloud to a final point spacing of 1.7 cm (this is our standard workflow that proved to be useful for previous scanning tasks). the georeferenced 3d point cloud was loaded into esri®'s arcgis® 10 together with the fifteen dsms. those dsms were exported from photoscan using a 2 cm grid spacing, since previous research already indicated that large cell sizes can result in quite significant accuracy losses when dealing with complex terrains [17]. additionally, 2 cm seemed a feasible grid spacing considering the density of the used laser point cloud. for the accuracy assessment, a rectangular test area (4 by 4 m) was chosen in which the complete topographic surface variation was present. it was also verified that the point spacing was still 1.7 cm. in this area, all fifteen dsms were sampled for their altitude value at the > 50,000 tls point locations. as the tls measurements were the basis for the comparison, they were handled as the true values. by treating the values of the dsms as observed values, several metrics could be extracted from this dataset (table 1): a maximum positive and negative altitude difference, the mean (μ) difference, the mean of all absolute altitude differences, the standard deviation (σ) and the root-mean-square error (rmse). since absolute accuracy defines how well the observed value corresponds to the true value, rmse is often used to assess the horizontal and vertical positional accuracy. because the standard deviation describes the amount of variation that occurs between all the successive measurements, this metric can be applied to indicate the precision (often called relative accuracy in the field of dsm).
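metrics of this kind can be reproduced from the raw dsm-minus-tls height differences with a few lines of numpy; the nssda vertical accuracy at the 95 % level is simply 1.96 · rmse_z. the sample arrays below are invented and only illustrate the computation, they are not the project data.

```python
import numpy as np

def vertical_accuracy_metrics(dsm_z, tls_z):
    """accuracy/precision metrics for dsm heights checked against tls heights sampled
    at the same locations; nssda vertical accuracy (95 %) = 1.96 * rmse_z."""
    diff = np.asarray(dsm_z, float) - np.asarray(tls_z, float)
    rmse = np.sqrt((diff ** 2).mean())
    return {
        "max_neg_diff": diff.min(),
        "max_pos_diff": diff.max(),
        "mean_diff": diff.mean(),
        "mean_abs_diff": np.abs(diff).mean(),
        "std": diff.std(ddof=1),
        "rmse": rmse,
        "nssda_95": 1.96 * rmse,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tls = rng.uniform(411.0, 411.5, 50_000)           # invented reference heights
    dsm = tls + rng.normal(-0.011, 0.021, tls.size)   # invented dsm errors
    for name, value in vertical_accuracy_metrics(dsm, tls).items():
        print(f"{name:14s} {value: .3f}")
```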
it should be noted that in this case, both metrics only provide information on the vertical component of the computed dsms. to incorporate all possible uncertainties in the computed dataset (including those introduced by the gcps), the final vertical accuracy values are expressed at the 95% confidence interval using the national standard for spatial data accuracy (nssda): 1.96 rmsez [18]. this shows that the most accurate surface (dem 15) has an nssda vertical accuracy of 0.041 m, while a vertical accuracy of 0.045m is retrieved for dsm 10. these figures mean that 95% of all the computed 3d points have an error with respect to the true ground position that is smaller or equal to the stated accuracy metric. regarding the fact that both the tls and photoscan georeferencing is accurate to within about 1 cm and, additionally, the tls data is characterised by a noise of ± 1-2 cm in the < 10 m range [19,20], the calculated rmse is more or less falling in the typical random error range. therefore, this test allows one to assume that the photoscan result has more or less the same overall accuracy as the tls data set. ________________________________________________________________________________ geoinformatics ctu fce 2011 85 figure 2: (a) difference grid between photoscan dsm 10 and the tls data that were processed by our standard workflow; (b) the profile a-b indicated in (a) shows the differences between the original point cloud, the photoscan dsm and the dsm extracted by our standard workflow from the tls data. additionally, a visual assessment of both vertical and horizontal positional accuracy is provided in figure 2a, which displays a tls-versus-photoscan difference grid and noticeably reveals the biggest differences (see also table 1), which are in this example situated along some sharp edges. on the one hand, some of these errors are in accordance to the edge effects known from previous tls research [19]. on the other hand, our applied tls workflow (i.e. merging scan positions, resampling and octree filtering) generated a variety of wrong points (certainly when compared to the original point cloud displayed in figure 2a. this is clearly shown in the profile. still, it is remarkable that the computer vision approach was able to retrieve these sharp forms quite well. in the flatter areas, the profiles also expose the lower noise of the photoscan dsm, although the surface might be slightly oversmoothed. the true surface is thus likely somewhere in the middle of both tls and photographic approaches. even when sub-centimetre accuracy is generally not of much importance in excavation recording, photoscan certainly proves its capabilities – at least in this test area – in detecting and modelling very small details. finally, this comparison also revealed some shortcomings of our default tls processing chain (data reduction in order to speed up processing), since the original point cloud (figure 2a) represented the edges much better. ________________________________________________________________________________ geoinformatics ctu fce 2011 86 4. discussion and prospects during the last years, the demand for accurate and fast generation of 3d surface models has been increasing in several domains. archaeology was no exception to this. however, since archaeologists often have to deal with cost constraints, using a laser scanner is not always feasible. 
in the previous section, it was clearly shown that one can acquire very accurate 3d information about archaeological interfaces using state-of-the-art computer vision approaches. it again needs to be stressed that the imagery using in this comparison was not specifically acquired for this type of approach. the amount of image overlap and the camera positions were not at all optimised for a digital surface reconstruction. still, the accuracy obtained can be considered sufficient for archaeological work. besides, the workflow is very straightforward, only little familiarity with photogrammetry or computer vision is assumed and no expensive hardor software is involved for the data acquisition. however, generating high-quality models from large datasets does require adequate computing resources. finally, also old imagery can be reprocessed into accurate 3d surfaces and orthophotographs. to illustrate this, our approach was applied onto a set of six 1.6 mp handheld oblique images (figure 3a). those were shot more than ten years ago using a canon digital compact camera (powershot pro70) and represent a late neolithic pit (feature interface se30i) found on the multi-period open settlement site of platt in lower austria, 70 km north of vienna [20]. apart from the pixel values, no other data were preserved, meaning that photoscan had no initial focal length values to start from. as in the previous case study, four total station-measured gcps were visible in each image, as well as some in-situ measured breaklines and surface points. as figure 3b indicates, the 3d model retrieved from these archived images is still very useful and more than sufficient for visualisation of the feature interface. only small parts of the interface are lacking, since the bottom was not everywhere equally well covered by digital photographs. notwithstanding, enough digital information was initially captured to allow the production of an accurate orthophotograph. figure 3c shows the rectified photograph that was originally calculated from one of the oblique images using a simple projective transformation. when overlaid with the total station breakline measurements, one can see the big deviations due to topographic displacements and lens distortion. comparing this result with the output produced by photoscan more than a decade later (figure 3d) again highlights the potential of the latter approach. these results should in no way be interpreted as a statement that tls should be replaced by image-based modelling approaches in excavation work. first of all, we were able to generate similar results with both techniques which are usable for archaeological interpretation. secondly, tls has proven its reliability over years. although the current examples prove sfm algorithms to be a very valid alternative for 3d single-surface recording, it has to be stressed that this approach is obviously not perfect. when dealing with very large photo collections, highly oblique images or photographs that have a dissimilar appearance, erroneous alignment of the imagery can occur. besides, it should be clear that high quality reconstructions with large image files are very resource intensive. a multicore processor, a decent amount of ram (minimum 8 gb), a 64-bit operating system and – most importantly a high-end graphical card are minimum requirements for successful processing. table 1 also gives a short overview of the processing times recorded during the reconstruction of the aforementioned dsms. 
notice how the stepwise increase of output quality comes with a serious time penalty. luckily, the metrics of table 1 show that even lower-quality dsms were more than sufficient to digitally represent the uncovered surface for archaeological documentation. figure 3: (a) one of the platt pit mages; (b) the surface and camera poses recovered by photoscan; (c) a rectified pit image and the photoscan (ps) orthophoto, both overlaid with measured breaklines (see text). ________________________________________________________________________________ geoinformatics ctu fce 2011 87 5. conclusion in this paper, the goal was to present an inexpensive approach to fast and accurate 3d surface recording. the method is mainly based on several computer vision techniques and is very straightforward to execute and integrate in the general excavation methodology. moreover, it also offers the enormous advantage that there are just standard photographic recording prerequisites. apart from a sufficient amount of sharp images covering the scene to be reconstructed and at least three gcps to transform the reconstruction into an absolute coordinate frame, no other information is needed (although exif metadata information – e.g. even gpc coordinates – can be utilized). besides, only a minimal technical knowledge and user interaction are required. finally, this approach can also work in total absence of any information about the instrument the imagery was acquired with. to illustrate this, archived data from previous excavations of vias-university of vienna have been chosen to model feature interfaces after which they were examined for their usefulness in terms of archaeological visualisation and extraction of metric information. to evaluate their geometric accuracy, the 3d models have been compared to simultaneously acquired total station and tls data. although the imagery had been shot before the development of this approach, the dsms generated by photoscan showed only small derivations from those produced by our standard tls-workflow and can therefore be considered as useful for our excavation purposes. while it needs to be stressed that obtaining millimetre accuracy is not an archaeological aim in itself and it will – for most archaeological excavations – not deeply change our fundamental understanding of the past when compared to more conventional registration methods, archaeologists should always strive to document an excavation as detailed and accurately as reasonably possible, since it is a one-time and very destructive event. the lack of financial means to apply an on-site laser scanner or the technical expertise required to use photogrammetrical approaches have often been considered the main hindrances in reaching appropriate 3d excavation documentation, even these days. thanks to the world-wide availability of digital still cameras and the integration of state-of-the-art computer vision and photogrammetry algorithms in a user-friendly software package, all the tools are now available to overcome the previous constraints and establish a straightforward, low-cost workflow for excavation recording that can be executed by technically low-trained archaeologists. the presented case-studies already showed that both image-based and tls approaches have their drawbacks and advantages. however, they can both be considered valid techniques for fast and accurate 3d single-surface recording. 
even though future investigations under different controlled conditions are necessary to assess the image-based modelling more thoroughly and quantify whether and under which conditions sfm approaches are a reliable documentation technique for archaeological excavations. 5. ackowledgements the ludwig boltzmann institute for archaeological prospection and virtual archaeology (archpro.lbg.ac.at) is based on an international cooperation of the ludwig boltzmann gesellschaft (a), the university of vienna (a), the vienna university of technology (a), the austrian central institute for meteorology and geodynamic (a), the office of the provincial government of lower austria (a), rgzm-romangermanic central museum mainz (d), raä-swedish national heritage board (s), ibm vista-university of birmingham (gb) and niku-norwegian institute for cultural heritage research (n). 6. references [1] doneus, m., neubauer, w.:3d laser scanners on archaeological excavations. in: dequal s.: (ed.) proceedings of the xxth international symposium cipa, torino 2005. the international archives of photogrammetry, remote sensing and spatial information sciences, vol. 36-5/c34/1, 2005, 226-231. [2] doneus, m., neubauer, w.: laser scanners for 3d documentation of stratigraphic excavations. in: baltsavias, e.p., baltsavias, m., gruen, a., van gool, l. & pateraki, m. (eds.). recording, modeling and visualization of cultural heritage. taylor & francis, 2006, pp. 193-203. [3] neubauer, w.: laser scanning and archaeology – standard tool for 3d documentation of excavations. gim international – the global magazine for geomatics 21 (10), 2007, 14–17. [4] ullman, s.:the interpretation of structure from motion. proceedings of the royal society of london b 203, 1979, 405-426. [5] fisher, r., dawson-howe, k., fitzgibbon, a., robertson, c., trucco, e.:dictionary of computer vision and image processing. hoboken, john wiley & sons, 2005. [6] szeliski, r.:computer vision: algorithms and applications. london, springer, 2010. [7] hartley, r., zisserman, a.: multiple view geometry in computer vision. cambridge, cambridge university press, 2004. ________________________________________________________________________________ geoinformatics ctu fce 2011 88 [8] snavely, n.:bundler: structure from motion for unordered image collections. http://phototour.cs.washington.edu/bundler,2010-10-28. [9] microsoft corporation:microsoft photosynth. http://photosynth.net, 2010-10-28. [10] verhoeven, g.:taking computer vision aloft – archaeological three-dimensional reconstructions from aerial photographs with photoscan. archaeological prospection 18(1), 2011, 67-73. [11] scharstein, d., szeliski, r.:a taxonomy and evaluation of dense two-frame stereo correspondence algorithms. international journal of computer vision 47, 2002, 7-42. [12] seitz, s., curless, b., diebel, j., scharstein, d., szeliski,r.:a comparison and evaluation of multi-view stereo reconstruction algorithms. cvpr 2006/1, 519–526. [13] pollefeys, m., van gool, l.:visual modelling: from images to images. the journal of visualization and computer animation 13 (4), 2002, 199–209. [14] pollefeys, m., van gool, l., vergauwen, m., cornelis, k., verbiest, f., tops, j.:3d recording for archaeological fieldwork. in ieee computer graphics and applications 23 (3), 2003, 20-27. [15] fera, m., neubauer, w., doneus, m.:kg schwarzenbach. fundberichte aus österreich 47, 200ř (200ř), 553-555. [16] lowe, d.:distinctive image features from scale-invariant keypoints. 
international journal of computer vision 60 (2), 2004, 91–110. [17] chen, c., yue, t.:a method of dem construction and related error analysis. computers & geosciences 36(6), 2010, 717-725. [18] federal geographic data committee:geospatial positioning accuracy standards. part 3: national standard for spatial data accuracy (fgdc-std-007.3-199). reston, 1998. [19] böhler, w., bordasvicent, m., marbs, a.:investigating laser scanner accuracy, in: proceedings of the xixthe cipa symposium, antalya, turkey. 30 september-4 october, 2003. [20] staiger, r.:the geometrical quality of terrestrial laser scanner (tls). in: from pharaohs to geomatics. fig working week 2005 and gsdi-8. cairo, egypt april 16-21, 2005. [21] fera, m., neubauer, w., doneus, m., eder-hinterleitner, a.:magnetic prospecting and targeted excavation of the prehistoric settlement platt-reitlüsse, austria. archaeologia polona 41, 2003, 165-167. http://photosynth.net/ gpu-accelerated raster map reprojection petr sloup department of geomatics, faculty of civil engineering czech technical university in prague thákurova 7, 166 29 prague 6, czech republic petr.sloup@fsv.cvut.cz abstract reprojecting raster maps from one projection to another is an essential part of many cartographic processes (map comparison, overlays, data presentation, ... ) and reduction of the required computational time is desirable and often significantly decreases overall processing costs. the raster reprojection process operates per-pixel and is, therefore, a good candidate for gpu-based parallelization. the architecture of gpu allows a high degree of parallelism. the article describes an experimental implementation of the raster reprojection with gpu-based parallelization (using opencl api). during the evaluation, we compare the performance of our implementation to the optimized gdalwarp utility and show that there is a class of problems where gpu-based parallelization can lead to more than sevenfold speedup. keywords: raster reprojection, warping, parallelization, opencl, gpgpu, gpu. introduction representation of the surface of the earth in two dimensions has always been important for performing various tasks such as navigation or planning. although this process has developed significantly throughout the history, it is not possible to represent earth’s surface on a plane without distortion (as proven by [4]). many map projections exist and each distorts the map in a specific way (angles, areas, distances, . . . ). because of this, maps are created in various projections depending on the depicted area and the intended use. there is often need to visually compare two or more maps or even digitally display one map over another (overlay). this task is usually impossible without having the maps in the same projection, which can be achieved through the process of reprojection. precise transformation of digital raster map from one projection to another requires computationally intensive per-pixel calculations, which can cause the reprojection process to take very long time (even hours for larger datasets). because of this, working with many gis applications can be slow and inefficient. this is especially unacceptable in certain situations such as natural crisis management (hurricanes, tsunamis, wildfire, . . . ) when rapid response is crucial in order to minimize property damage or even save lives. longer processing can also be more costly – especially with modern cloud-based computing, which is often charged depending on the computation durations. 
this article briefly describes the process of raster reprojection, parallelization techniques based on gpu (graphics processing unit) and outlines how it can be utilized to achieve significantly faster reprojection times without reducing output quality. geoinformatics fce ctu 15(1), 2016, doi:10.14311/gi.15.1.5 61 http://orcid.org/0000-0003-4600-0527 http://dx.doi.org/10.14311/gi.15.1.5 http://creativecommons.org/licenses/by/4.0/ p. sloup: gpu-accelerated raster map reprojection raster reprojection as mentioned above, raster map reprojection is a process, when a new raster map in one projection is mathematically derived from an existing raster map in a different projection. the reprojection process inputs are usually: • input properties: raster data (this can be one or more files, possibly accessed via network or too large to decompress as one piece); projection; extent • output requirements: projection; extent (if it differs from the input extent); raster dimensions or resolution • reprojection parameters: resampling method; nodata values; . . . the first phase of the process is to determine the extent of the required source data in order to avoid loading and handling of unnecessarily large data. this is usually achieved using the inverse transformation (from the output projection to the input projection) to transform regularly sampled points (at least corners) of the desired output extent. the bounding box of the transformed points (minimum and maximum in each axis) then outlines the required area in the input data that needs to be processed. then, there are two common approaches to the actual data transformation: a) forward transformation (source-oriented) the more intuitive approach is to read each input data pixel, determine its coordinate in the input projection and then calculate its coordinate in the output projection using the forward transformation. the pixel color is then written to the proper output pixel position. this approach can be quite straightforward to implement on certain platforms, but provides significantly less control over the reprojection process and implementation of different resampling methods can be very complicated. it can also be inefficient for downsampling (when the input raster is significantly larger than the desired output) or when only a subset of the raster is needed. b) inverse transformation (output-oriented) the more common approach is to process individual pixels of the output raster – determine the output pixel coordinate in the output projection and calculate input coordinate using the inverse transformation. the color of the output pixel is then determined by sampling one or more pixels near the appropriate position in the input raster. the particular sampling method depends on specific application needs. this solution provides more control over the process and can be often computationally more efficient (since no unnecessary transformations are performed). the gdalwarp utility – part of gdal (geospatial data abstraction library) [5] – is often used for raster data reprojection between arbitrary projections and data formats. it uses the inverse projection approach described above. geoinformatics fce ctu 15(1), 2016 62 p. sloup: gpu-accelerated raster map reprojection parallelization large datasets usually cannot be processed at once, because the uncompressed raster data would not fit into the computer’s operating memory (or even hard drive). the raster datasets are therefore split into chunks and each of them is processed individually. 
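before turning to parallelization, the inverse (output-oriented) mapping described above can be illustrated by a short, purely sequential sketch; inverse_transform is a placeholder for the projection-specific inverse formulas (e.g. as provided by proj.4) and nearest-neighbour sampling stands in for the chosen resampling method:

    import numpy as np

    def reproject(src, src_extent, dst_shape, dst_extent, inverse_transform, nodata=0):
        """inverse (output-oriented) reprojection of a single-band raster, nearest neighbour."""
        sh, sw = src.shape
        dh, dw = dst_shape
        x0, y0, x1, y1 = dst_extent          # output extent in the output projection
        u0, v0, u1, v1 = src_extent          # input extent in the input projection
        dst = np.full(dst_shape, nodata, dtype=src.dtype)
        for row in range(dh):
            for col in range(dw):
                # output pixel position -> coordinate in the output projection
                x = x0 + (col + 0.5) * (x1 - x0) / dw
                y = y1 - (row + 0.5) * (y1 - y0) / dh
                # inverse transformation -> coordinate in the input projection
                u, v = inverse_transform(x, y)
                # input coordinate -> input pixel position (nearest neighbour sampling)
                sc = int((u - u0) / (u1 - u0) * sw)
                sr = int((v1 - v) / (v1 - v0) * sh)
                if 0 <= sr < sh and 0 <= sc < sw:
                    dst[row, col] = src[sr, sc]
        return dst

note that the work done for one output pixel is independent of all other pixels, which is exactly what makes the fine-grained parallelization discussed next attractive.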
the process can be parallelized on several levels – ranging from task parallelism (or control parallelism) to data parallelism, which usually scales better with the size of a problem [1]: a) input/output operations the reprojection is done sequentially by a single thread (chunk-by-chunk), but the blocking io operations are done asynchronously in a second thread (preparing data for the main thread). b) raster blocks (chunks) several chunks can be processed in parallel – modern cpus can efficiently run up to 16 threads. gdal implements this parallelization approach. the calculations over particular chunk (including transformations) are, however, still performed sequentially. c) individual pixels the most fine-grained approach is to parallelize the individual per-pixel calculations: inverse transformation, resampling, postprocessing operations. the per-pixel parallelization is not suitable for regular cpu (central processing unit) architecture – the overhead of creating and running thousands of threads on (up to) tens of cores would significantly outweigh any performance gain. modern gpus, on the other hand, can have up to thousands of cores that can be programmed to perform various calculations. this allows the developers to perceive the gpus as parallel computers and employ data-parallel programming style [6]. general-purpose computing on graphics processing units at the beginning of the 21st century, the gpu manufacturers (driven by the entertainment industry) started to implement the programmable pipeline model (as opposed to the fixedfunction pipeline, where the graphics data processing is largely fixed and cannot be programmed). the programmable pipeline allows the developers to manipulate the graphics processing and rendering by writing small programs called shaders. the popularity of this model led to a gradual increase in the number of processing cores, which are used to execute the shaders. the most common frameworks for gpu programming of graphics are opengl (open graphics library) and directx. since a large number of cores (up to thousands) can be very beneficial for many non-graphics applications [9], there has been a significant development in the area of gpgpu (generalpurpose computing on graphics processing units) over the last several years. opencl opencl (open computing language) [8] is a framework for applications executing across heterogeneous platforms such as cpus and gpus. it is an open standard that provides a cross-platform abstraction to the gpgpu capabilities (as well as cpu parallelism). opencl geoinformatics fce ctu 15(1), 2016 63 p. sloup: gpu-accelerated raster map reprojection drivers are available for all the latest graphics cards from major manufacturers for all major operating systems. the framework can be used to create applications running on the host (cpu) that can initiate data transfers and execute programs (called kernels – the analogy of the shaders) on the device (gpu). the kernels are written in opencl c language, which is based on c programming language [7]. the code is compiled at runtime by the hardware-specific opencl drivers to ensure maximal portability. the kernels do not have access to io operations (filesystem, networking, . . . ) – this has to be handled by the host process and all the required data need to be explicitly transferred to the device memory prior to the kernel execution. 
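as a generic illustration of this host/device split (not the implementation evaluated in this article), the following sketch assumes the pyopencl bindings: the kernel source, written in opencl c, is compiled at run time by the driver and the buffers are explicitly copied between host and device memory:

    import numpy as np
    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void scale(__global const float *src, __global float *dst, const float factor) {
        int gid = get_global_id(0);   /* one work-item per array element */
        dst[gid] = src[gid] * factor;
    }
    """

    ctx = cl.create_some_context()                 # select an available opencl device
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL_SRC).build()  # run-time compilation by the driver

    host_in = np.arange(8, dtype=np.float32)
    mf = cl.mem_flags
    dev_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_in)  # host -> device
    dev_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_in.nbytes)

    program.scale(queue, host_in.shape, None, dev_in, dev_out, np.float32(2.0))
    host_out = np.empty_like(host_in)
    cl.enqueue_copy(queue, host_out, dev_out)      # device -> host
    print(host_out)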
when the kernel is executed, it can run many times in parallel (up to the number of available cores), but the execution is similar to simd model (single instruction, multiple data; according to flynn’s taxonomy [3]) when running on gpu device – all the threads are executing the same instruction at any given time. it is therefore important to avoid code branching (conditional statements, loops with a dynamic number of iterations, . . . ) to optimize performance. this is a restriction of given hardware architecture (in comparison to cpu), which allows for the much higher number of cores. opencl-accelerated warping experimental implementation was created to evaluate the proposed idea of raster warping using opencl. the implemented method can be summed up into the following steps: 1. the required output raster is divided into several more manageable chunks than can fit into the gpu memory. 2. for each chunk, the required source window is calculated – by applying the inverse transformation on the uniformly sampled grid of 32 × 32 points covering the chunk extent and calculating their bounding box. (the value of 32 was empirically chosen as sufficient for determining the source window. gdal uses 21 by default for a similar process.) 3. the input raster data covering the calculated source window are loaded. 4. the output pixels (covering the chunk) are calculated using the inverse transformation approach described above. 5. the calculated chunk content is written to the proper position in the output file. steps 1 and 2 are executed sequentially and only once. steps 3–5 are executed sequentially and repeated for every chunk, but the execution of these steps can overlap, so the reprojection itself can run in parallel with the io operations. during the reprojection (step 4) the input data has to be explicitly transferred from cpu memory to gpu memory and output buffer has to be allocated. the kernel function is designed to calculate color of a single output pixel (output pixel position → coordinate in output projection → coordinate in input projection → input pixel position → sample the input buffer) and write it to the output buffer. the calculation is carried inside the kernel that is written in opencl c and compiled at the start of the application. geoinformatics fce ctu 15(1), 2016 64 p. sloup: gpu-accelerated raster map reprojection this means, however, that the coordinate transformation needs to be written in opencl c. gdal uses proj.4 library [2] for the transformations, but in our implementation selected projections were ported manually. our implementation also uses gdal drivers for reading and writing files to ensure the performance of input/output operations is comparable and does not distort the performance evaluation of the parallelization itself. evaluation the performance of the experimental implementation was evaluated on common desktop computer (intel core i5-6600 @ 3.30 ghz×4 cores, 16 gb ram) running ubuntu 15.10 64bit. internal ssd was used for reading input files and storing outputs. the computer was equipped with amd radeon r9 380 graphics card with 4 gb memory and 1792 cores clocked at 1000 mhz. the blue marble next generation dataset [10] was used for the evaluation process (various input sizes, see table 1 for details). the reprojection was performed from wgs84 (epsg:4326) to mollweide projection with bilinear resampling method being used. 
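step 2 of this workflow can be sketched as follows (a simplified illustration, with inverse_transform again standing in for the ported projection formulas and assumed to accept numpy arrays):

    import numpy as np

    def source_window(chunk_extent, inverse_transform, n=32):
        """estimate the required source extent of one output chunk.

        a regular n x n grid of points covering the chunk extent (in the output
        projection) is transformed back to the input projection and its bounding
        box is returned as the source window that has to be loaded.
        """
        x0, y0, x1, y1 = chunk_extent
        gx, gy = np.meshgrid(np.linspace(x0, x1, n), np.linspace(y0, y1, n))
        u, v = inverse_transform(gx.ravel(), gy.ravel())
        return float(np.min(u)), float(np.min(v)), float(np.max(u)), float(np.max(v))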
figure 1: the blue marble next generation dataset [10] displayed in epsg:4326 (left) and the mollweide projection (right)

the gdalwarp utility (part of gdal version 1.11.2) was used for verification and performance comparison with two different levels of parallelization. the following parameters were used for the first test: gdalwarp -s_srs epsg:4326 -t_srs +proj=moll -multi -wm 500 (the -multi argument enables parallelization of computation and io operations; -wm 500 increases allowed memory usage to 500 mb to increase performance). for the second test, -wo num_threads=all_cpus was added to enable parallelization of the raster chunk processing up to the maximal number of threads the cpu can operate at once (equals to 4 on the testing pc). results from gdalwarp and our implementation were compared using the idiff utility. the outputs were per-pixel identical inside the bounds of the projection. see table 1 for a detailed comparison of execution times for different input and output sizes.

results
although the evaluation of the initial implementation shows a certain performance gain when compared to gdalwarp, certain limitations can be observed.

table 1: performance testing of opencl warping in comparison to gdal. values in gdal† are for parallelizing io operations with computation, while values in gdal‡ are for parallelizing also the individual chunk processing. times for the small dataset are averaged from 100 consecutive independent executions, for the other datasets from 5 executions.

        input [px]    output [px]    gdal† [s]   gdal‡ [s]   opencl [s]   speedup
small   1024×512      1024×512         0.2051      0.0992      0.2562     0.39×
                      2048×1024        0.7054      0.2667      0.3756     0.71×
large   21600×10800   1024×512         2.34        2.30        1.83       1.26×
                      8192×4096       11.87        5.21        1.97       2.64×
                      21600×10800     73.41       27.65        3.80       7.28×
huge    86400×43200   8192×4096      173.15      172.29      168.28       1.02×
                      21600×10800    198.64      191.16      172.17       1.11×
                      86400×43200    980.82      865.37      528.68       1.64×

the computation time for the small dataset turned out to be actually worse than gdalwarp. this is caused by the fact that the performance gain from opencl parallelization is smaller than the runtime kernel compilation overhead. the effect, however, would be less significant when more computations are required (complex transformations, postprocessing, etc. – see below) or for batch processing use cases (which would require only one kernel compilation for warping multiple datasets and/or extracting more extents). processing of the huge dataset also yielded no significant performance gain (for the smaller output sizes) since the time required for the loading of the input data and memory management is far longer than the processing time. although the relative speedup of 1.64× (in the case of the largest output size) does not seem to be very significant, the absolute time difference is more than 5 minutes (14:24 vs 8:48), which can be very important in certain situations. overall, the memory transfers (cpu ram ↔ gpu vram) are the most time-consuming part of the process. therefore, the performance gain of this parallelization approach is more significant for reprojections with more complex per-pixel calculations: mathematically complex transformation, resampling method, and possibly postprocessing in the future (noise reduction, color corrections, edge detection, . . . ).

conclusions and future work
in this paper, we have briefly described a possible approach to gpu-based per-pixel parallelization of the raster map reprojection process.
the experimental implementation has shown that there is a set of problems that can significantly benefit from gpu-based acceleration. to fully utilize the potential of this parallelization method, the amount of raw computation needs to be as high as possible relative to the amount of data transfers. it is therefore most suitable for high output resolutions (same as the input resolutions) or even upscaling. on the other hand, this approach is not very effective for small datasets, where the overhead of initializing the opencl environment is too high to have any actual benefit, or for significant downsampling, where the whole input needs to be copied to the gpu, which takes too long geoinformatics fce ctu 15(1), 2016 66 p. sloup: gpu-accelerated raster map reprojection in comparison to the amount of the subsequent transformation calculations. in the future implementations, the benefit of this parallelization approach can further increase with more complex transformations (e.g. warping based on ground control points) and resampling methods. furthermore, various raster operations (e.g., noise reduction, color corrections, color mapping) can also be performed really fast both before and after the warping since the data are already present in the gpu memory. development of a tool for batch processing would also help better utilize the gpu potential – the opencl initialization could only be done once for multiple input and output datasets. similarly, this parallelization approach could be beneficial for creating tile pyramids – input file would be loaded and transferred to gpu memory only once and used to create many smaller files. acknowledgements this work has been done as part of ph.d. research at czech technical university in prague in cooperation with the klokan technologies gmbh for the future version of the maptiler product (http://www.maptiler.com/). references [1] d. e. culler, j. p. singh, and a. gupta. parallel computer architecture. a hardware/software approach. morgan kaufmann publishers, sept. 1998. isbn: 9781558603431. [2] g. evenden, f. warmerdam, et al. proj.4 – cartographic projections library. [online]. http://proj.osgeo.org/. may 2015. [3] m. flynn. “some computer organizations and their effectiveness”. in: computers, ieee transactions on c-21.9 (sept. 1972), pp. 948–960. issn: 0018-9340. doi: 10 . 1109/tc.1972.5009071. [4] c. f. gauss, j. c. morehead, and a. m. hiltebeitel. general investigations of curved surfaces of 1827 and 1825. the princeton university library, 1902, p. 148. [5] gdal development team. gdal – geospatial data abstraction library, version 1.11.2. open source geospatial foundation. 2015. url: http://www.gdal.org. [6] w. d. hillis and g. l. steele jr. “data parallel algorithms”. in: communications of the acm 29.12 (dec. 1986), pp. 1170–1183. issn: 0001-0782. doi: 10.1145/7902.7903. [7] khronos opencl working group. opencl c specification. ed. by lee howes aaftab munshi and bartosz sochacki. 2015. url: https://www.khronos.org/registry/cl/ specs/opencl-2.0-openclc.pdf. [8] khronos opencl working group. the opencl specification, version 1.1. ed. by aaftab munshi. 2011. url: https://www.khronos.org/registry/cl/specs/opencl1.1.pdf. [9] j. d. owens et al. “a survey of general-purpose computation on graphics hardware”. in: computer graphics forum 26.1 (mar. 2007), pp. 80–113. issn: 1467-8659. doi: 10.1111/j.1467-8659.2007.01012.x. 
[10] r. stöckli et al. the blue marble next generation – a true color earth dataset including seasonal dynamics from modis. published by the nasa earth observatory. 2005. url: http://visibleearth.nasa.gov/view.php?id=73751.

adjustment and testing comparison of absolute gravimeters in november 2013
alena pešková∗ and martin štroner
czech technical university in prague, czech republic
∗corresponding author: alena.peskova@fsv.cvut.cz

abstract. this paper is focused on the processing of a comparison measurement of absolute gravimeters in 2013. the comparison deals with a number of various types of absolute gravimeters and also includes an absolute gravimeter from the geodetic observatory pecný. comparative measurements are performed to detect systematic errors of gravimeters. the results of the processing are the most likely value of gravity and the systematic error of the individual devices. the measured values are input to an adjustment with the condition that the sum of the systematic errors is zero. part of this process is also the verification of the following output: (i) the value of the a posteriori standard deviation, (ii) the size of the corrections and (iii) the statistical significance of the systematic errors. the results of the adjustment are 15 gravity values on the reference places and 25 systematic errors of the measuring instruments. the result shows that the presence of systematic errors in the measurements is not statistically provable because the systematic errors are of similar size as their standard deviations.

keywords: absolute gravimeter; adjustment with condition; systematic error; gravity.

1. introduction
since 1981, international comparisons of absolute gravimeters have been held regularly; their goal is to detect systematic errors of these gravimeters [10]. for this purpose, repeated measurements by absolute gravimeters on reference places in a laboratory are used, together with simultaneous measurement by a superconducting gravimeter. this study analyses the comparison measurement which took place in november 2013 in the underground laboratory for geodynamics in walferdange, luxembourg. twenty-five gravimeters participated in the comparison, including a new prototype from china. the goal of this study is to determine the gravity value at the individual places of measurement and, based on this, to estimate the systematic error of the individual devices and the standard deviation of their measurements. an integral part of the calculation is the verification of the accuracy of the individual gravimeters. one absolute gravimeter fg5 from the geodetic observatory pecný, the czech republic, was also used during the measurement event. absolute gravimeters are used for periodic measurements on geodetic gravity points; in such a case gravity and its change are determined. the measurement results are used for creating a geoid model, which is important for elevation determination.
by analyzing the gravity field of the earth it is possible to localize inhomogeneities in the earth's crust, which can be used e.g. for locating deposits of mineral resources.

2. the current state of the problem
the current accuracy of the determination of the gravity value is approximately 1·10⁻⁷ gal (1 gal = 1 cm/s²) and it has been achieved thanks to a gradual improvement of measurement techniques. the only way to estimate the accuracy is a comparison of the devices between each other on a single base. these international comparisons of absolute gravimeters (icag) have been regularly organized since 1981. the first eight comparisons took place at the bureau international des poids et mesures in france and since 2003 the other comparisons have taken place in walferdange in luxembourg [17]. the "free-fall" method has been the main method for gravity measurement since 2004 [12].

3. used instruments
seven types of absolute gravimeters were used for the comparison; these instruments used three methods of determination of the gravity value. during the whole comparison measurement the gravity change was measured by a superconducting gravimeter. the following paragraphs present a brief description of the gravimeters and the principles of the gravity determination.

3.1. free-fall gravimeter
this type of gravimeter measures the gravity value by tracking the free fall trajectory (especially the time) in a vacuum. interference rings are produced by a falling optical prism in a drop chamber and this optical output is converted to an electrical signal. the position is obtained by counting and measuring the occurrence times of the rings; the position is a function of time which is then used for the gravity value determination [13]. the most common gravimeters are represented by the fg5; there were 13 devices of this type. one result of a measurement by the gravimeter fg5 is calculated using at least 600 recorded pairs of position and time, while 100-200 falls can be done during one hour [14]. this detail is a good estimate of the comparison measurement duration. more technical specifications of this gravimeter are available on the manufacturer's website http://www.microglacoste.com/fg5.php. the gravimeter fg5x is the second most used absolute gravimeter in the experiment. there were 6 devices of this gravimeter type in the comparison measurement. this gravimeter is an improved version of the type fg5, where the improvement consists of an improved drop chamber for the free fall and better electronic control [12]. an advantage of the gravimeter a10 is the optimization of the device for use outside laboratory conditions, for example greater compactness and lower weight. the gravimeter consists of a drop chamber and an interferometer. the interferometer contains a laser which is capable of emitting laser beams of two different wavelengths. the gravity measurement uses the same observation principle of mass fall tracked by the laser interferometer as the previous devices. the resulting gravity is determined as the mean of the two laser beam measurements [11]. details are available on the manufacturer's website http://www.microglacoste.com/a10.php. in laboratory conditions 100 falls are performed during one hour of measurement for determining gravity [12].
the chinese gravimeters nim-3a and t-2 are two new prototypes. the gravimeter nim-3a is constructed for long-term continuous observation; the difference lies in the drop chamber, which rotates along the horizontal axis. the free fall of the optical prism is initiated by turning the drop chamber to the vertical position. the fall parameters are determined by the interferometer [8].

figure 1: absolute gravimeter fg5 n.215, geodetic observatory pecný.

3.2. atomic gravimeter
the gravimeter cag-01 operates on an atom release basis; its acceleration is determined by interferometry [2]. the free fall of the atom is conducted in the space between two laser beams, which determines its trajectory position. the gravimeter consists of two parts: the gravimeter core is formed by a seismometer and a vacuum chamber placed in a magnetic cylinder; the second part is composed of an optical and an electronic bench. as the gravimeter precision is influenced by ambient conditions, e.g. ground vibrations, this gravimeter belongs to the group of laboratory gravimeters [5].

3.3. rise-and-fall gravimeter
the italian gravimeter imgc02 is the only gravimeter working on the "rise-and-fall" principle. gravity is determined by measuring the ascending and descending symmetric vertical trajectory of the body. the measuring set consists of approximately 200 starts [9, 1]. the device allows a speed of about 120 starts per hour. the gravimeter consists of five parts: a catapult in a vacuum chamber, an interferometer, a laser, a photodetector and a supporting frame [4].

table 1: gravimeter accuracy provided by operators.
absolute gravimeter   uncertainty [µgal]
fg5                   1.8-2.6
fg5x                  2.0-2.3
a10-006               10.7
a10-020               5.2-5.5
imgc02                5.3
cag-01                5.3-5.4
nim-3a                4.9-5.2
t-2                   5.0

figure 2: absolute gravimeter imgc-02.

3.4. superconducting gravimeter
the superconducting gravimeter osg-ct040 was used for monitoring gravity changes during the whole comparison measurement. the monitored mass is located in a magnetic field generated by superconducting magnets. a gravity change causes the monitored mass to move; this movement is monitored by the sensor and electromagnetically compensated. the accuracy of the gravity change determination is 0.1-0.4 µgal [16, 15].

3.5. relative gravimeter
relative gravimeters are used only to determine a gravity change between places. the gravity change is given by a voltage value. this voltage is supplied to the compensator plates while the distance between the plates stays constant. the measurement was used to determine the coefficients a, b, which are part of the calculation of the vertical gravity gradient. this gradient is used to correct gravimeter measurements made at various heights to the reference height. the resolution of the relative gravimeter is 1 µgal and the accuracy is <5 µgal [3].

4. measurement procedure
there were 15 reference measurement places in the underground laboratory (fig. 3). the schedule was designed so that all 25 gravimeters could measure successively on 3 reference places. the gravimeter fg5 n.242 measured only once on one reference place because it had a serious malfunction without any possibility of repair. 4-6 measurements were carried out by each gravimeter on one reference place. the measurement event was divided into two stages. the first stage took place on 5-7
november 2013 and the second stage took place on 12-14 november 2013. an exception was the gravimeter cag-01, because it needed more space for the measurement; this device measured in advance, on 24-28 october 2013. the second exception was the gravimeter fg5 n.223, which could not measure in november, so an alternative date in february 2014 was set up for the measurement by this device.

figure 3: sketch of the underground laboratory for geodynamics in walferdange (platforms a, b and c with reference places a1-a5, b1-b5 and c1-c5).

the superconducting gravimeter osg-ct040 recorded the gravity changes during the measurement by the absolute gravimeters. these measurements were adjusted by the same corrections of the earth and atmospheric tides as the measurements by the absolute gravimeters. all of the measurements were moved to the same date due to the settings of the correction; the correction was set to zero on 9 november 2013. the operators provided the acceleration averages of the free fall at different heights adjusted by corrections, the standard deviations of the averages and the uncertainties. the uncertainties include all known contributions of the devices [6].

5. preparation for processing
the operators rectified their data using the following corrections:
1. the earth tides including the ocean tides and associated influences
2. the atmospheric tides and associated influences, with the input factor −0.3 µgal/hpa applied to the difference between the atmospheric pressure from the standard measurement model and the local atmospheric pressure
3. the impact of polar motion, estimated from the position of the poles; the international earth rotation and reference systems service (iers) provided the data on polar motion
4. the vertical gravity gradient
5. the other known influences of the devices, e.g. the influence of their self-attraction, the laser beam diffraction correction, etc.

the data provided by the operators were not at the same height, therefore they were transferred to the reference height of 1.3 m. the reference heights are different for the various types of devices. the vertical gravity gradient was used to transfer the data provided by the operators to the reference height. polynomial (1) was used:

g(z) = a·z² + b·z + c (1)

the coefficients a, b were different for the various reference places in the laboratory and they were determined by a measurement; the coefficient c was neglected. these coefficients were determined by a measurement in november 2013, one week after the comparison measurement event. the relative gravimeters cg5#008 and cg5#542 were used for this measurement, which took place on all of the places in the laboratory and at three heights, 0.26 m, 0.86 m and 1.27 m. the polynomial allows the value of gravity to be transferred along the vertical axis to the reference height. before the calculation by the least squares method (lsm), the application of the vertical gravity gradient, which was used to transfer the data to the reference height of 1.3 m, was checked using a repeated calculation. to simplify the calculation, the measured gravity value is reduced by the constant value 980 960 000.0 µgal.
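as a small illustration of this height transfer with polynomial (1), the following sketch reduces one observed value to the 1.3 m reference height; the gradient coefficients used here are purely illustrative and are not the coefficients determined in the laboratory:

    def transfer_to_reference(g_obs_ugal, z_obs_m, z_ref_m=1.3, a=3.0, b=-308.6):
        """reduce a gravity value observed at height z_obs_m to the reference height.

        g(z) = a*z**2 + b*z (the constant term c is neglected) describes the gravity
        change along the vertical in microgal; a and b are illustrative values only
        (b is close to the usual free-air gradient of about -3.086 microgal per cm).
        """
        correction = (a * z_ref_m**2 + b * z_ref_m) - (a * z_obs_m**2 + b * z_obs_m)
        return g_obs_ugal + correction

    # e.g. a value observed at a height of 1.21 m reduced to the 1.3 m reference height
    print(transfer_to_reference(4228.6, 1.21))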
6. systematic errors determination of the measuring instruments
the determination of the gravity values on the reference places in the laboratory is one result of processing the measured data. another result is an estimation of the systematic errors of the individual absolute gravimeters with their standard deviations. the used calculation method is the lsm.

6.1. the least squares method adjustment of the measurement
the gravity measured on the various points at the same height and the uncertainties of the gravimeters provided by the operators are the inputs into the least squares adjustment with conditions. the observation equation has the form:

g_ik = g_k + δ_i + ε_ik, (2)

where g_ik represents the gravity value on place k measured by the device i, g_k is the adjusted value on place k, δ_i represents a systematic error of the measuring instrument and ε_ik is a random error. the adjustment is performed according to [7] with the condition:

∑ δ_i = 0. (3)

the weight matrix is created from the inverse squares of the gravimeter uncertainties 1/σ². the adjusted values are the gravity values on the places and the systematic errors of the measuring instruments. the weight matrix p has a dimension of 73×73, with the inverse squares of the uncertainties of each gravimeter on the diagonal:

P = \begin{pmatrix} 1/\sigma_1^2 & 0 & \cdots & 0 \\ 0 & 1/\sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/\sigma_{73}^2 \end{pmatrix}. (4)

the design matrix a_1 has 73 rows given by the number of measurements and 40 columns given by the number of unknowns: the gravity values on the places (a1-c5) and the systematic errors of the measuring instruments (δ_1 - δ_25):

A_1 = \begin{pmatrix} \partial x_1/\partial A1 & \cdots & \partial x_1/\partial C5 & \partial x_1/\partial\delta_1 & \cdots & \partial x_1/\partial\delta_{25} \\ \vdots & & \vdots & \vdots & & \vdots \\ \partial x_{73}/\partial A1 & \cdots & \partial x_{73}/\partial C5 & \partial x_{73}/\partial\delta_1 & \cdots & \partial x_{73}/\partial\delta_{25} \end{pmatrix}. (5)

a vector b complements the matrix a_1 so that the condition of a zero sum of the systematic errors is fulfilled. the resulting matrix of normal equations a has the form:

A = \begin{pmatrix} A_1^T P A_1 & b \\ b^T & 0 \end{pmatrix}. (6)

the vector b is a column vector with a dimension of 40×1, where the first 15 rows (reference places) are filled with 0 and the other 25 rows (used gravimeters) are filled with 1; this guarantees the fulfillment of the condition mentioned above. the vector of measured values l_0 has a dimension of 73×1 and it is also modified into a new shape l:

l_0 = \begin{pmatrix} l_1 \\ l_2 \\ \vdots \\ l_{73} \end{pmatrix}, (7)

l = \begin{pmatrix} A_1^T P\, l_0 \\ b \end{pmatrix}. (8)

based on the matrices and vectors described above, it is possible to set up the system of normal equations of the adjustment with conditions for the unknowns:

\begin{pmatrix} A_1^T P A_1 & b \\ b^T & 0 \end{pmatrix} \begin{pmatrix} dx \\ k \end{pmatrix} + \begin{pmatrix} A_1^T P\, l_0 \\ b \end{pmatrix} = 0. (9)

the further calculation procedure is the same as for the classic adjustment [7]. the lsm adjustment provides 15 values of gravity on the reference places and 25 systematic errors of the measuring instruments. the results are in the following tables 2 and 3.

6.2. the result verification and testing
the results of the adjustment were checked and tested in the following ways according to [7]. the first assessed quantity was the value of the a posteriori standard deviation; for this purpose a two-sided χ² test was used. the zero hypothesis:

H_0: σ_apriori = σ_aposteriori (10)

the tested values are σ_apriori = 1 and σ_aposteriori = 0.7998. the zero hypothesis h_0 is not rejected based on the test at the 5% level of significance. this means the a posteriori standard deviation coincides with the a priori standard deviation. this fact confirms that the accuracy of the measurement corresponds to the assumption and the a priori standard deviation can be used for further needs. the a posteriori standard deviation value was also tested using the permitted sample standard deviation for the probability 95%:

s_m = σ·(1 + √(2/n′)), (11)
where n′ = 33 is the number of redundant observations. the permitted sample standard deviation was not exceeded, σ_apriori < s_m.

table 2: the gravity on the reference places [µgal] with the subtracted constant value 980 960 000.0 µgal and their standard deviations.
reference place   gravity   standard deviation
a1                4228.6    1.6
a2                4215.8    1.6
a3                4206.5    1.3
a4                4190.0    1.1
a5                4183.2    1.5
b1                4077.4    1.5
b2                4071.8    1.2
b3                4068.8    1.3
b4                4062.4    1.5
b5                4049.4    1.3
c1                3951.4    1.2
c2                3945.9    1.8
c3                3948.6    1.2
c4                3946.4    1.4
c5                3942.7    1.3

the second assessed quantity was the size of the corrections, evaluated by the permitted correction:

v_m = σ_v·u_p, (12)

where

σ_v = σ_0·(P⁻¹ − A·(Aᵀ·P·A)⁻¹·Aᵀ)_ii (13)

and u_p = 2 for the probability 95% and u_p = 2.5 for the probability 99%. the permitted values were not exceeded for any selected value of u_p. the histogram of standardized corrections (figure 4) shows how the data set corresponds to the normal distribution. the left side of the histogram does not fit the normal distribution as well as the right side; this can be caused by the relatively small number of observations. the last assessed quantity was the systematic errors of the gravimeters. the testing was performed using the permitted deviation:

δ_m = q_x·u_p (14)

for u_p = 2 and u_p = 2.5, where q_x are the standard deviations of each calculated systematic error. the vector q_x has a dimension of 25×1 and its elements are the square roots of the diagonal elements of the covariance matrix q_xx. assuming that the a priori unit standard deviation was chosen to be 1, the covariance matrix q_xx is determined by the following equation:

Q_xx = \begin{pmatrix} A_1^T P A_1 & b \\ b^T & 0 \end{pmatrix}^{-1}. (15)
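to make the structure of equations (4)-(9) and (15) concrete, the following is a minimal numpy sketch of the same bordered system on a small synthetic example (three places, two instruments); it is not the processing code used for the comparison and it solves directly for the unknowns in the usual gauss-markov sign convention instead of the corrections dx:

    import numpy as np

    # synthetic observations: (place index, instrument index, measured gravity, uncertainty)
    obs = [(0, 0, 4228.5, 2.0), (1, 0, 4215.9, 2.0), (2, 0, 4206.4, 2.0),
           (0, 1, 4230.1, 3.0), (1, 1, 4217.2, 3.0), (2, 1, 4208.0, 3.0)]
    n_places, n_instr = 3, 2
    n_obs, n_unk = len(obs), n_places + n_instr

    A1 = np.zeros((n_obs, n_unk))           # design matrix, cf. equation (5)
    P = np.zeros((n_obs, n_obs))            # weight matrix, cf. equation (4)
    l0 = np.zeros(n_obs)
    for r, (k, i, g, sigma) in enumerate(obs):
        A1[r, k] = 1.0                      # gravity value on place k
        A1[r, n_places + i] = 1.0           # systematic error of instrument i
        P[r, r] = 1.0 / sigma**2
        l0[r] = g

    b = np.concatenate([np.zeros(n_places), np.ones(n_instr)])  # condition: sum of systematic errors = 0

    # bordered normal equations, cf. equations (6) and (9)
    N = np.block([[A1.T @ P @ A1, b[:, None]], [b[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([A1.T @ P @ l0, [0.0]])
    x = np.linalg.solve(N, rhs)             # gravity values, systematic errors, lagrange multiplier

    Qxx = np.linalg.inv(N)                  # cofactor matrix, cf. equation (15)
    std = np.sqrt(np.abs(np.diag(Qxx)[:n_unk]))  # abs() only guards against numerical noise
    print(x[:n_places], x[n_places:n_unk], std)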
the measurement would take eight days instead of the original geoinformatics fce ctu 16(1), 2017 87 a. pešková and m. štroner: adjsutment and testing of absolute gravimeters figure 4: the histogram of standardized corrections. 6 days. acknowledgements this work was supported by the grant agency of the czech technical university in prague, grant no. sgs16/062/ohk1/1t/11 and it used data set from ccm.g-k2 key comparison and pilot study. references [1] g. barbato et al. “treatment of experimental data with discordant observations. issues in empirical identification of distribution”. in: measurement science review 12.4 (2012), pp. 133–140. doi: 10.2478/v10048-012-0020-y. [2] yannick bidel et al. “compact cold atom gravimeter for field applications”. in: applied physics letters 102.14 (apr. 2013), p. 144107. doi: 10.1063/1.4801756. [3] cg-5 autograv gravity meter. canada. url: http://www.scintrexltd.com/gravity. html. [4] g. dagostino et al. “the new imgc-02 transportable absolute gravimeter. measurement apparatus and applications in geophysics and volcanology”. in: annals of geophysics 51.1 (2008), pp. 39–49. doi: 10.4401/ag-3038. [5] t. farah et al. “underground operation at best sensitivity of the mobile lne-syrte cold atom gravimeter”. in: gyroscopy and navigation vol. 5.issue 4 (2014), pp. 266–274. issn: 2075-1087. doi: 10.1134/s2075108714040051. geoinformatics fce ctu 16(1), 2017 88 https://doi.org/10.2478/v10048-012-0020-y https://doi.org/10.1063/1.4801756 http://www.scintrexltd.com/gravity.html http://www.scintrexltd.com/gravity.html https://doi.org/10.4401/ag-3038 https://doi.org/10.1134/s2075108714040051 a. pešková and m. štroner: adjsutment and testing of absolute gravimeters [6] olivier francis, henri baumann, et al. international compari of absolute gravimeters: ccm.g-k2 key comparison. tech. rep. university of luxemburg, federal institute of metrology metas, nov. 2014. url: http : / / www . bipm . org / utils / common / pdf / final_reports/m/g-k2/ccm.g-k2.pdf. [7] miroslav hampacher and martin štroner. zpracování a analýza měření v inženýrské geodézii. vyd. 1. praha: české vysoké učení technické v praze, 2011. isbn: 978-80-0104900-6. [8] h. hanada et al. “new design of absolute gravimeter for continuous observations”. in: review of scientific instruments vol. 58.issue 4 (1987), pp. 669–673. issn: 0034-6748. doi: 10.1063/1.1139237. [9] il gravimetro imgc-02. italy. url: http://www.nanospin.eu/res/grav/grav.html. [10] z jiang et al. “the 8th international comparison of absolute gravimeters 2009. the first key comparison (ccm.g-k1) in the field of absolute gravimetry”. in: metrologia 49.6 (dec. 2012), pp. 666–684. issn: 0026-1394. doi: 10.1088/0026-1394/49/6/666. [11] takahito kazama et al. “gravity measurements with a portable absolute gravimeter a10 in syowa station and langhovde, east antarctica”. in: polar science vol. 7.3-4 (2013), pp. 260–277. issn: 18739652. doi: 10.1016/j.polar.2013.07.001. [12] micro-g lacoste. a division of lrs. url: http://www.microglacoste.com. [13] t.m. niebauer et al. “a new generation of absolute gravimeters”. in: metrologia 32.3 (1995), pp. 159–180. [14] vojtech pálinkáš, jakub kostelecký, and miloš val’ko. “charakteristiky přesnosti absolutního gravimetru fg5 č. 215”. in: geodetický a kartografický obzor 58/100.5 (2012), pp. 97–102. issn: 0016-7096. [15] superconducting gravity sensors. 2015. url: http://www.gwrinstruments.com/iosgsuperconducting-gravity-meters.html. [16] supravodivý gravimetr osg-050. 2010. url: http://oko.pecny.cz/pecny/supgrav. 
html. [17] o de viron, m van camp, and o francis. “revisiting absolute gravimeter intercomparisons”. in: metrologia 48.5 (oct. 2011), pp. 290–298. issn: 0026-1394. doi: 10.1088/ 0026-1394/48/5/008. geoinformatics fce ctu 16(1), 2017 89 http://www.bipm.org/utils/common/pdf/final_reports/m/g-k2/ccm.g-k2.pdf http://www.bipm.org/utils/common/pdf/final_reports/m/g-k2/ccm.g-k2.pdf https://doi.org/10.1063/1.1139237 http://www.nanospin.eu/res/grav/grav.html https://doi.org/10.1088/0026-1394/49/6/666 https://doi.org/10.1016/j.polar.2013.07.001 http://www.microglacoste.com http://www.gwrinstruments.com/iosg-superconducting-gravity-meters.html http://www.gwrinstruments.com/iosg-superconducting-gravity-meters.html http://oko.pecny.cz/pecny/supgrav.html http://oko.pecny.cz/pecny/supgrav.html https://doi.org/10.1088/0026-1394/48/5/008 https://doi.org/10.1088/0026-1394/48/5/008 geoinformatics fce ctu 16(1), 2017 90 a. pešková and m. štroner: adjsutment and testing of absolute gravimeters introduction the current state of the problem used instruments free-fall gravimeter atomic gravimeter rise-and-fall gravimeter superconducting gravimeter relative gravimeter measurement procedure preparation for processing systematic errors determination of a measuring instruments the least squares method adjustment of the measurement the result verification and testing conclusion geoeasy an open source project for surveying calculations geoeasy an open source project for surveying calculations zoltan siki department of geodesy and surveying, budapest university of technology and economics in budapest, hungary siki.zoltan@epito.bme.hu abstract. geoeasy is a complex tool for land surveyors to calculate coordinates from observations. it is available on linux and windows platforms, too. a graphical user interface (gui) makes the program user friendly. besides the basic surveying calculations like intersection, resection, traversing, etc. modules extend its functionality to network adjustment, digital terrain models and regression calculations. the program supports several input and output formats, so it can easily be inserted into user’s work-flow. this paper introduces some functionalities of the software. from version 3.0 geoeasy became free software under gpl license. volunteers (testers, document writers, developers) are welcome! keywords: surveying calculation; network adjustment; dtm; regression calculation; open source. 1. introduction though geoeasy is quite new among the open source software, it has a longer history as a proprietary program. in the late nineties there were demands for a software for surveying calculations which is hungarian localized. at that time tcl/tk (tool command language/toolkit) [1] script language was a popular graphical programming environment working for both linux and windows platforms, so this language was chosen in 1997 for geoeasy, the software which i started to develop for surveying calculations. during the last twenty years with less or greater intensity the development continued. nowadays many hungarian small enterprises and our university use it. in 2017 it became free software under gpl license, freely available for everybody. the source code repository can be found on github [9], binary releases for windows (32/64 bit) and linux (64bit only) can be downloaded from the home page of the project [5]. 2. 
2. base conception

objectives of the development were to create software with a user friendly graphical user interface (gui) for surveyors, in a modular structure, with flexible, open connections to other programs. educational purposes were also considered besides productivity. there are step by step and automatic (several calculations made in a single step) solutions for the same task. for example, users can calculate orientations station by station, manually selecting from the possible backsight directions, or all orientations can be done in a single automatic process. horizontal coordinates are supposed to be plane coordinates in the same projected coordinate reference system (crs).

geoeasy utilizes other open source projects and several open data formats. gnu gama [3, 2] is used for geodetic network adjustment, triangle [8] is used to generate triangulated irregular networks (tin). these programs are used through file data exchange. geoeasy has interfaces to these cli programs which generate the input file(s) for the external tools and read back their output. other open source tools are used to create binary releases and install kits: tcl cruncher generates the compressed tcl source code, removing comments; freewrap [6] wraps the tcl code into an executable linux/windows file; nullsoft scriptable install system (nsis) [7] makes the windows install kit.

the possible import/export formats are listed in figure 1. all import and export formats use different text files, which are platform independent. some of the input files come from total stations (leica gsi, idx; sokkia scr, sdr; topcon 700, 210; trimble m5; geodimeter job/are). the other input files can come from other programs (surv ce rw5, autocad dxf, excel csv, arcgis ascii grid, n4ce). the imported field-books can be postprocessed and edited in geoeasy. the export formats, on the one hand, can be used to upload coordinates to the instrument in a vendor specific format. on the other hand, the export formats are useful to import data into other programs (autocad, googleearth, gps trackmaker).

figure 1: data import/export formats.

an extendable, flexible native data structure was developed for geoeasy to store field-book and coordinate data. it is based on tcl lists and associative arrays. each field-book is an array; an array element contains station or observation data for a point. the order of observations is kept by integer array indexes. each array item is a list of sublists; a sublist contains two items, the first is an integer code and the second is a value for that code (see figure 2). the codes are taken from the geodimeter job/are format and extended by necessary extra codes. an observation record must have a 2 (station id) or 62 (backsight id) or 5 (point id) code. other code/value pairs are optional, for example 3 (instrument height), 6 (signal height), 7 (horizontal angle), 8 (vertical angle), 9 (slope distance). during the development new codes were added without refactoring the program code.

figure 2: field-book data structure.

a very similar structure is used for coordinate lists; the only difference is in the array indexes and the used codes.
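to make the code/value organization above more concrete, here is an illustrative sketch in python (geoeasy itself stores the same information in tcl lists and associative arrays); the sample records and values are invented and only mimic the structure, they are not real geoeasy data.

```python
# sketch of the field-book structure: observation order -> list of (code, value) pairs
fieldbook = {
    1: [(2, "ST01"), (3, 1.55)],                  # station id, instrument height
    2: [(62, "ST02"), (7, 0.0000)],               # backsight id, horizontal angle
    3: [(5, "P101"), (6, 1.80), (7, 123.4567),    # point id, signal height, hz angle
        (8, 91.2345), (9, 152.370)],              # vertical angle, slope distance
}

def get_value(record, code):
    """return the value stored for a given code, or None if the code is missing."""
    for c, v in record:
        if c == code:
            return v
    return None

print(get_value(fieldbook[3], 9))   # slope distance of the third record
```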
for coordinate lists associative arrays are used where the index is the point id, which must be unique in the array. a third type of data is the parameter file. it is a single list of code/value pairs defining some general information for a field-book (e.g. date, user name, etc.).

several data sets can be loaded and used for the calculations in a session; they are all handled as being in the same spatial reference system. the currently loaded field-books and coordinate lists are stored in memory. the coordinate lists can contain two types of coordinates, preliminary and final coordinates. preliminary coordinates are used not only for network adjustments but also to display points in the graphic window. this way the user can visualize and select point(s) in the graphic window before deciding on the coordinate calculation method(s).

the gui consists of several windows, which is usual on linux but can be unusual for windows users; it resembles the gimp (gnu image manipulation program) user interface. when the program starts, two windows are opened: the main window, which contains a menu and a few graphics, and the calculation results window, which holds the calculation results and program log messages. users can open separate windows to edit field-books and coordinate lists and to display points and observations in a map view (see figure 3).

figure 3: different windows of the program.

the program can handle several templates to visualize field-books and coordinate lists. the templates consist of column definitions which include codes for the values to display and the units to convert to. besides the built-in templates, user defined templates can be added, too. dms, gon (grad) and deg angle units and meter, feet and fathom length units are supported. users can change and extend the standard mask definitions or can create separate files for custom mask definitions and load them on demand.

finally, it should be mentioned among the base concepts that calculations are repeatable. after changing the observations, coordinates or parameters, users can repeat any calculation, but it is not automatic. calculation results are sent to different targets: calculation details can be seen in the calculation results window if it is open, and they are always sent to the log file (geo_easy.log) in the installation directory.

3. using the software

3.1. the user interface

the user interface of the program consists of menus, sub-menus of the different windows and modal dialog boxes. some types of windows, the field-book, the coordinate list and the graphic windows, have pop-up menus, too. the graphic windows also have a toolbar. before starting the work in geoeasy, data have to be loaded. input sources are:

• electronic field-book downloaded from a total station
• text file created by other programs
• manual data input through the gui

any number of geoeasy data sets from different sources can be opened in the same session. the user can change the used template (displayed columns and units) for the opened field-book; for example, the angle units can be changed from dms to gon by changing the template. the content of the field-books and coordinate lists can be edited. users can open several (maximum 10) graphic windows. points from all loaded data sets are displayed in a graphic window. besides point markers and point ids, the observations are displayed as lines; an arrow shows the direction of the measurement.
the fill color of the point markers has a special meaning: green means an oriented station, red a not oriented station and white no station. the point id label is red if only preliminary horizontal coordinates are available for a point. the content of the graphic windows can be set on a per-window basis and the used colors can also be customized.

3.2. simple calculations

most of the basic coordinate calculations are included in geoeasy. there are calculations for the coordinates of a single point (e.g. intersection, resection). these calculations can be selected from the pop-up menu: right click on the point in any window and select from the different calculation methods. those methods which are not available (e.g. there are not enough observations) are greyed out. calculation methods which calculate the coordinates of more points (for example traversing) can be found in the calculation menu. traverse lines can also be given in the graphic window using the traversing tool from the toolbar.

two dimensional coordinate transformations are also available in geoeasy and are based on common points in two projected spatial reference systems. these transformations can be used only for a limited area of the surface of the earth, usually a few square kilometers. the transformation is made between a source data set, which has to be loaded, and a selected target data set. depending on the number of common points with horizontal coordinates in the two data sets, users can select orthogonal (three or four parameters), affine and polynomial (second and third order) transformations. not only the transformed coordinates but also the transformation parameters can be saved and can be applied to other data sets. in the result list of a transformation the residuals and the rms are calculated (a small illustrative sketch of the four-parameter case follows below).
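as an illustration of the orthogonal (four-parameter) transformation mentioned above, the following sketch estimates scale, rotation and the two shifts from common points by least squares; the point coordinates are invented sample values and the code is only a simplified outline, not the geoeasy implementation.

```python
import numpy as np

# common points: (x, y) in the source system and (X, Y) in the target system
src = np.array([[100.0, 200.0], [350.0, 180.0], [220.0, 420.0], [400.0, 390.0]])
dst = np.array([[1100.5, 2199.8], [1350.9, 2180.2], [1220.3, 2420.1], [1400.8, 2390.5]])

# model: X = a*x - b*y + tx,  Y = b*x + a*y + ty   (a = m*cos(rot), b = m*sin(rot))
rows, rhs = [], []
for (x, y), (X, Y) in zip(src, dst):
    rows.append([x, -y, 1.0, 0.0]); rhs.append(X)
    rows.append([y,  x, 0.0, 1.0]); rhs.append(Y)
A, l = np.array(rows), np.array(rhs)

params, *_ = np.linalg.lstsq(A, l, rcond=None)   # a, b, tx, ty
a, b, tx, ty = params
scale = np.hypot(a, b)
rotation = np.degrees(np.arctan2(b, a))

residuals = A @ params - l
rms = np.sqrt(np.mean(residuals ** 2))
print(scale, rotation, tx, ty, rms)
```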
3.3. network adjustment

most of the work in network adjustment is done by gnu gama; geoeasy only prepares the data for the adjustment. all loaded data sets are considered in the adjustment calculations. though gama-local can calculate preliminary coordinates, in geoeasy the preliminary coordinates and orientations have to be calculated before the network adjustment. it can be an automatic process (preliminary coordinates from the calculation menu). the aim of this is to give more control into the hands of the user. on the one hand, points with preliminary coordinates can be seen in the graphic window, so users can visually check the shape of the network before adjustment. on the other hand, geoeasy can compare the observed values to the values calculated from the preliminary coordinates before adjustment and sends a warning if the difference is too large.

preliminary coordinate calculation is made in an iteration. first, approximate orientations are calculated; in this step points with preliminary coordinates are also considered as backsight directions. then different trigonometric calculations (polar, resection, intersection, arcsection, etc.) are used for the coordinate calculations. after new coordinates are calculated for a point, orientation calculations are tried again, followed by coordinate calculations again, until no more new orientations and coordinates can be calculated. a similar iteration is used for preliminary elevations. at the end of the process users get a list of the points for which horizontal coordinates or elevations could not be calculated.

a priori standard deviations of observations can be set per data set in the dialog box of the observation parameters (edit menu of the main window) or globally in the dialog box of the calculation parameters (file menu of the main window). global a priori standard deviations are only used if they are not set in the calculation parameters of the data set. standard deviations of short directions are handled specially if the distance limit value is set in the dialog box of the adjustment parameters (file menu of the main window): if a mean direction or zenith angle is measured between points closer to each other than the limit, the standard deviations are increased (multiplied) by the ratio of the distance limit and the distance of the two points. the default value of the distance limit is 200 meters.

users can select 1d/2d/3d network adjustment from the calculation menu. observations which are related to a point without coordinates are not sent to the gnu gama xml file for adjustment. a free network is calculated if all points are marked as unknown. adjusted coordinates and orientation angles are read back from the gnu gama xml output. blunders marked by gnu gama are not automatically removed from the adjustment: users have to manually remove the observation with the largest test statistic above the critical value from the field-book and run the adjustment again, repeating this process until all blunders are removed.

3.4. regression calculation

regression calculation is useful in the geometrical control of engineering objects and structures. for example, using observed points the following questions can be answered: is the wall vertical, or is the new tunnel circular? the best fitting geometry for the selected points can be calculated with this module using the least squares method. some linear and non-linear shapes can be selected in 2d or 3d. all points are considered with unit weight in the adjustment calculation. supported geometries are 2d line (three variants), 2d circle, general plane (two variants), horizontal plane, vertical plane, 3d line and sphere. results are shown in the calculation results window (see figure 4) and stored in the log file. besides the parameters of the geometry, the distances from the best fitting shape to the given points and the rms are calculated. rms (root mean square) is a measure of the imperfection of the fit of the regression shape to the points.

figure 4: circle regression results.

3.5. digital terrain models (dtm)

the dtm module generates triangulated irregular networks (tin) using the open source triangle project. not only scattered points but also break lines can be input for dtms, so a constrained delaunay triangulation is made. dtm functionality is only available from the menu of the graphic window. users can select from two input sources. one source is the points in the loaded coordinate lists and the break lines drawn in the graphic window. the other source can be an autocad dxf file with 3d points and 3d break lines. the generated dtm is saved into three text files (.pnt, .pol and .dtm). users can only load/use one dtm at a time. triangles are displayed in the graphic window, too. points, break lines and triangles can be deleted from a loaded dtm, and points and break lines can be added to it. the tin has to be regenerated after adding or deleting items to it.
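geoeasy drives the external triangle program for the constrained case; as a rough illustration of the unconstrained part of the idea, the sketch below builds a plain delaunay tin from scattered points with scipy. the coordinates are invented sample data and break lines are not handled here.

```python
import numpy as np
from scipy.spatial import Delaunay

# invented sample points: x, y, z (elevation)
pts = np.array([
    [0.0, 0.0, 101.2], [50.0, 5.0, 102.8], [25.0, 40.0, 104.1],
    [70.0, 45.0, 103.0], [10.0, 70.0, 105.5], [60.0, 80.0, 106.2],
])

tin = Delaunay(pts[:, :2])         # triangulate on the horizontal coordinates only

for tri in tin.simplices:          # indices of the three vertices of each triangle
    a, b, c = pts[tri]
    # horizontal area of the triangle from the cross product of two edge vectors
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    mean_z = (a[2] + b[2] + c[2]) / 3.0
    print(tri, round(float(area), 2), round(float(mean_z), 2))
```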
besides the dtm generation, this module adds more functionalities to geoeasy. volumes can be calculated between the tin and a horizontal reference plane or between two tins. in the latter case grids are generated from the two tins for the volume difference calculation. contour lines and profiles can be generated and exported to autodesk dxf format. tins can be exported into various formats (kml, esri ascii grid, vrml) to visualize them in 3d.

3.6. comeasy

the comeasy module is responsible for the communication between the instrument and the computer. jobs stored in the memory of the total station can be downloaded for data processing, or coordinates can be uploaded for setting out in the field. wired (rs-232) connections are supported, which are available on most total stations. from the beginning of the project this module was released under an open source license.

4. localization

currently the gui has english and hungarian translations. the english version is used in the english courses at our university. all messages are collected into a single tcl source file whose extension refers to the language (e.g. geo_easy.eng for english strings, geo_easy.hun for hungarian strings). users are encouraged to make a copy of the english message file and translate the strings to their mother tongue. there are about seven hundred messages to translate. please send us back your translation or make a pull request on github [9].

5. future plans

the goal of changing the license to gpl was to widen the developing capacity and the user community behind the project. i was inspired by the osgeo projects which i took part in as a translator, documenter and coder. i hope developers, experts and users from all over the world will voluntarily be involved in the geoeasy project. issues and pull requests are accepted on the github page [9] of the project. during the first two weeks after changing the license to gpl, there were more than 200 downloads of the windows installation kit.

geoeasy has a nephew project called surveyingcalculation [10]. it is a qgis plug-in; some parts of geoeasy were rewritten in python. this plug-in is available from the official qgis plug-in repository. there is no clear future plan yet on how to maintain both projects in parallel.

though the gui of geoeasy has been translated into english, the program contains some hungarian specialties; for example, the kml and the gps trackmaker (txt) exports work for the hd72/eov (epsg:23700) spatial reference system only. in the development version on github the cs2cs utility of the proj.4 library is already used to convert coordinates from several spatial reference systems to wgs84, which is used by kml and gps trackmaker.

the weakest part of the project is now the documentation. previously only hungarian documentation had been made. the developer's documentation [4], the installation guide and a step by step guide for the sample data sets are already available in english from the github repository of the project.

acknowledgements

thanks to digikom ltd for supporting the development of geoeasy. i would also like to thank all colleagues and users of geoeasy who helped me with suggestions and error reports.

references

[1] brent b. welch and ken jones. practical programming in tcl/tk. upper saddle river, nj: prentice hall, 2003. isbn: 0-13-038560-3.

[2] aleš čepek et al. gnu gama. url: https://www.gnu.org/software/gama/ (visited on 02/06/2018).
[3] aleš čepek. "the gnu gama project: adjustment of geodetic networks". in: acta polytechnica 42.3 (mar. 2002), pp. 26–30. url: https://ojs.cvut.cz/ojs/index.php/ap/article/download/350/182.

[4] developers documentation. url: http://digikom.hu/tcldoc/ (visited on 01/10/2018).

[5] download page for binary releases of geoeasy. url: http://digikom.hu/english/geo_easy_e.html (visited on 11/08/2017).

[6] freewrap. url: http://freewrap.sourceforge.net/ (visited on 11/08/2017).

[7] nsis (nullsoft scriptable install system). url: https://sourceforge.net/projects/nsis/ (visited on 11/08/2017).

[8] jonathan richard shewchuk. "triangle: engineering a 2d quality mesh generator and delaunay triangulator". in: applied computational geometry: towards geometric engineering. ed. by ming c. lin and dinesh manocha. vol. 1148. lecture notes in computer science. from the first acm workshop on applied computational geometry. springer-verlag, may 1996, pp. 203–222. url: https://www.cs.cmu.edu/~quake/tripaper/triangle0.html.

[9] source code repository of the geoeasy project on github. url: https://github.com/zsiki/geoeasy (visited on 11/08/2017).

[10] surveying calculation module for qgis. url: https://plugins.qgis.org/plugins/surveyingcalculation/ (visited on 11/08/2017).

surveying and comparing the arco dei gavi and its historical wooden maquette

francesco guerra, paolo vernier
università iuav di venezia, sdl laboratorio di fotogrammetria, italia
guerra2@iuav.it ; vernier@iuav.it

keywords: laser scanning, 3d modelling, digital photogrammetry, cultural heritage, virtual anastylosis, preservation, documentation.

abstract: nowadays geomatics offers new opportunities and interesting applications in the field of cultural heritage. these applications are strictly related to preservation and restoration, but also to cataloguing and reproducing a monument that no longer has its original integrity. the possibility of obtaining 3d data and a model close to reality enables us to carry out studies that would otherwise be too complex or impossible. the paper describes the study of a monumental arch, the arco dei gavi, built in verona during the 1st century a.d., which was destroyed in 1805 by the napoleonic army, and of its wooden model, which was made in 1813 and has a very important role concerning the monument's reconstruction.
the purpose is to produce two three-dimensional models which can be compared to each other: two models with recognizable differences, similarities and discontinuities in the shapes and in the single elements that compose the monument. it should also be noted that some original parts of the monument have not been relocated but are poorly preserved in a museum: the 3d digital model helps to identify these parts in their original location. the main steps of the work can be summarized as: collecting the historical documentation of the arco dei gavi and its representations; identifying proper instruments (laser scanning and photogrammetric hardware and software); surveying the arch and its wooden model; identifying a unique and shared reference system; comparing both digital models brought to the same scale; choosing a three-dimensional representation to emphasize the results; relocation of outstanding pieces (virtual anastylosis).

1. introduction

thanks to ever-advancing software and hardware tools, modern geomatics offers new, interesting applications in the cultural heritage field. these applications do not just pertain to the preservation, restoration and reproduction-prototyping of monuments. they also deal with the digital, virtual planning of possible structural and conservative interventions on monuments that have been demolished or reconstructed, as well as with the cataloguing and studying of objects of various dimensions. the possibility of obtaining a digital 3d model, at once faithful to reality and untied from its strict bounds, enables us to perform operations and analyses otherwise too complex or impossible.

scale models and maquettes evoked a high level of craftsmanship and their perfect details had the effect of crystallizing ideas and anticipating the future. they aided the architect in "knowing the beauty of a building, whose idea he just conceived, before even starting its construction."

we examined the arco dei gavi (fig. 1), an architectural monument dated to the 1st century a.d., which was demolished in 1805, and its wooden model (fig. 2), built in 1813, which had a crucial role in the monument's reconstruction. in the case under study, the wooden model has a crucial, specific function: it embodies the database to use for the monument's reconstruction, as well as the tridimensional historic memory of a structure that has been demolished. hence, it will no longer anticipate the future, but it will help the architect to bring the monument back to life. the wooden model was made by a skilled carver, sughi, based on the outcomes of a direct survey of the arch by the architect barbieri and on previous surveys by palladio and the architect ederle. in 1932, the monument was rebuilt starting from this model, thanks to the intervention of professor avena, director of the city museums at the time.

today the numerical, digital model enables us to include all possible "views" in a single representational system and guarantees the same functions as the iconic, diagrammatic and mathematical models. the use of new architectural survey technologies produces a great amount of data that needs to be computed in order to create significant and specific digital representations. any form of representation, such as points or surfaces, has to refer to its generating element, namely its measure. every model can be classified depending on the degree of adherence to its original data.
therefore, the numerical model has to adhere as much as possible to the arch and to its maquette, in order to compare them without incurring deceptive simplifications or interpretations.

figure 1, 2: arco dei gavi and its wooden maquette.

2. objectives

studying these structures, the elements that raised our interest the most were their creation and their story. in theory, they should be scale representations of the same object, of the same "planning idea." however, in this particular case the sequence of events is inverted. the purpose of building a wooden model, a scale model or a digital model is to make tangible a project otherwise too difficult or complex to understand. in this case, instead, we have the scale representation of a previously existing monument. in the course of centuries, the monument was used, modified, surveyed by notable architects such as serlio (fig. 3) and palladio (fig. 4), and finally demolished. hence, what we want to restore and preserve in time is the original aspect of the arch, by building a faithful scale reproduction of the monument. the architect barbieri surveyed every single piece of the arch soon after its demolition and hypothesized a reconstruction based on palladio's survey, dated 1500, and on prof. ederle's survey, dated shortly before the demolition. thus, the goal is to obtain two numerical models to compare and analyze; two models that enable us to identify differences, equalities and discontinuities in the shape and in its composing pieces.

figure 3, 4: serlio's and palladio's drawings.

figure 5, 6: prof. ederle's reconstructive hypothesis.

obviously, the choice of the equipment used to perform the survey on two objects so different in structure from one another becomes critical. in fact, the wooden model is at scale 1:10 (calculated 1:9.7) compared to the original. therefore, it is necessary to use two instruments that give comparable outcomes once brought to the same scale. the two instruments we identified are the terrestrial laser scanner system riegl lms-z390i with an interfaced metric camera, which enables us to obtain textured triangulated surfaces and 3d orthophotos, and the scansystem scanprobe lt for the survey of the wooden model. the precision of the distance measurement of the first instrument is ±4 mm for a single scan and improves to ±2 mm for a scan sequence. the precision of the second instrument ranges between 0.1 and 0.2 mm depending on the calibration.

3. the survey of the arco dei gavi using the terrestrial laser scanner system lms-z390i

we planned three scans for each front side and two for the lateral sides. the inner and covered areas of the arch had to be scanned 4-5 times, changing the vertical axis of the instrument, in order to adequately acquire the whole area (fig. 8, 9). then, we proceeded with the materialization on the ground of two landmarks to realize a topographic base from which to survey the supports for the registration of the clouds.
in fact, the object and the surrounding area were marked with about 25 reflective targets, needed to register the scans in the same reference system. the next step was to identify the area of the first "cloud" of the area to survey and to define the resolution of the laser. the exit angle between one emission and the next provides a geometric spacing on the object; more precisely, it provides the mean of this spacing. the distance of the various surfaces and their angle relative to the equipment affect the spacing between two points. the angular value calculated for the point of maximum distance, 0.02 deg, allowed us to obtain a point every 2-3 mm. with these initial parameters we obtained scans made of about 10,000,000 points each.

after the metric measurements, we acquired the images with the camera mounted on the laser system. the known position of the digital camera and of the camera perspective centre, the focal length of the lens and the parameters to correct lens distortions constitute a calibrated camera, which provides images with known inner and outer orientation.

the clouds' registration procedure establishes a connection between the coordinates of the targets acquired with the laser and those acquired using the topographic method. this method performs a least squares adjustment on the distances between points to minimize the error, and it also automatically recalculates the position matrix and the laser's orientation. the computation's outcome is a standard deviation between 2 and 4 mm for the merging of each individual cloud with the topographic points. after the orientation, we proceeded with the cleaning of the clouds with filters to eliminate the outliers, the noise and any other interfering element that could invalidate or "spoil" the metric data. furthermore, we operated a controlled decimation to reduce the data by using the octree algorithm with a resolution close to 5 mm, so that we were sure not to eliminate points that were important for the geometrical understanding of the arch.

figure 8, 9: recording with the laser scanner of the inner and covered area of the arch.

figure 10, 11: view from the top of the merging of the clouds.

after the cleaning of the clouds, to which we attributed the rgb value (fig. 10, 11), and after their merging, we obtained the first 3d data for the comparison, in other words a tridimensional model for surfaces, the mesh (fig. 12). we did not fill the gaps in the mesh caused by shadowed areas, to prevent parts not corresponding to reality from invalidating the analyses and the comparison of the wooden model with the mesh. the metric data obtained also served to reconstruct a simplified model of the arch in a cad environment to obtain a digital database for each individual piece (still under processing).

figure 12: mesh of the facade (rapidform 2006).

4. the survey of the maquette of the arco dei gavi with the digital scanner scanprobe lt

the 3d triangulation system used projects onto the object under study eight patterns at increasing frequency, that is, images composed of alternating black and white vertical stripes that warp depending on the surface surveyed. the video camera records the images. the projector, the video camera and the pattern's points represent the vertices of a triangle in space and, therefore, the spatial position of the points on the profile is derived according to the known triangulation method.
for each point, we also recorded the radiometric values of the rgb channels. the system needs to be calibrated after every change in the geometric configuration, or rather in the baseline, which is the distance between sensor and projector. calibration procedures consist of the acquisition of a panel of known geometry, performed from different points, and provide the parameters of the inner and outer orientation of both sensor and projector. knowing these parameters is critical for an accurate triangulation. since this instrument enables us to vary the optogeometric setup, we can modify the baseline distance depending on the special requirements that each object has in terms of measuring.

first, we defined the recording distance as a function of the measuring area and the desired point density on the object. in our case, dimensions were up to 1x1 m and we wanted to obtain highly dense, detailed data to align the clouds, used as control points. to do so, we chose a calibration that allowed the recording of 400x300 mm areas from a distance of 70 cm. with these parameters, the accuracy reaches a tenth of a millimeter. once the system's settings and the number of recordings were established, we started with the acquisition of the "range maps" (that is, 3d images made out of thousands of point coordinates), each one describing a single portion of the object's surface (fig. 13, 14). after this approximate alignment, we performed an alignment using the icp (iterative closest point) algorithm, which entails the iterative minimization of the distance between two discretized surfaces. once all the range maps were properly aligned, we proceeded with their merging to generate a single mesh made of polygons. we could then refine the mesh with editing techniques, such as smoothing and other filters needed to utilize the data. at this point, it was possible to run direct measurements and analyses, in order to verify whether the outcome accurately reproduced the geometric shape of the object acquired.

to survey the wooden model, we had to perform about 150 scans, each made of 500,000 points. the structure of the object, the quality of the scanning and the accuracy of the survey required a higher level of control over the registration of the clouds in the same reference system. we used the icp algorithm implemented in the geomagic 10 software. we also checked the registration, to avoid the drift problem (beraldin, 2004), by creating a photogrammetric support using imagemodeler 2009, a software for monoscopic digital photogrammetry. to check the registration we created a reference system and a set of points: first we performed the scanning with icp, creating a set of swaths (8 swaths, two for each side, made of 20 scans each); then we registered and checked these swaths thanks to the photogrammetric control points (fig. 15).

figure 13, 14: recording phase and subsequent data display.

figure 15: complete merging thanks to the control points of the photogrammetric support.

purposely, during the editing phases we did not fill the gaps due to a lack of data. we did not want to add computed areas to the real surface of the surveyed object, because these would have been misleading during the subsequent comparisons and analyses between the two facades. therefore, the outcome was a model for surfaces of most of the wooden arch, textured with the rgb values, which was measurable and comparable. the inner part has not been acquired due to the excessive dimensions of the 3d scanner.
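the icp alignment mentioned above can be outlined in a few lines of code; the following simplified python sketch, with invented sample points, a nearest-neighbour search and a rigid best-fit step, only illustrates the iterative minimization of the distance between two point sets and is not the geomagic implementation used in the project.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_rigid(src, dst):
    """rotation + translation minimizing the squared distances between paired points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)    # nearest point in dst for every src point
        R, t = best_fit_rigid(current, dst[idx])
        current = current @ R.T + t
    return current

# invented sample data: a small cloud and a slightly rotated, shifted copy of it
rng = np.random.default_rng(0)
dst = rng.random((200, 3))
ang = np.radians(5.0)
R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = (dst - 0.02) @ R0.T
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())      # small value when the alignment succeeds
```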
5. analyses and comparison

after the acquisition of the data and the first phase of post-processing, during which we filtered and cleaned the clouds and registered them in the same reference system, and after the possible assignment of the rgb data, we started the pre-analysis phase. the aim of this phase was to verify the compatibility and comparability of the data. before the direct comparison of the models, we made some distance measurements on single clouds and on sets of registered clouds, in order to understand whether the two models were similar geometrical descriptions of the same object. measuring the arch's narrow areas as well as its whole height and width, that is to say invariant data concerning macro and micro areas, enabled us to obtain the first valid and encouraging results. in fact, the differences between the measures on the arch and on the model were in the vicinity of 1-2 cm. this represents an acceptable difference for a nominal scale of 1:50-1:100 and it is in line with the equipment's accuracy and the estimated accuracy of the 3d model. after having verified the comparability, we proceeded with the triangulation of the data to obtain a 3d model for surfaces.

                 architecture's arch   maquette   difference
height           1.068 m               1.070 m    0.002 m
width            1.098 m               1.097 m    0.001 m
base's width     0.307 m               0.306 m    0.001 m
therefore, we were able to keep the 3d model of the maquette steady, using as a reference system the one we used for its orientation, and to operate a rototranslation in space with scale variation only of the 3d model of the architecture's arch. effectively, we imported the reference system in the form of tridimensional coordinates using geomagic10, software used for the management of 3d data and for rapid prototyping. on the same file, we imported the mesh of the arch's facade, which had been oriented by assigning to the architectural points, manually selected, the computed coordinates after the scale variation. 5.1 comparison between surfaces and grid of points the first comparison of selected macro areas highlighted the differences, corresponding to the areas that were completely reconstructed and those with some original pieces, and the overlaps. analyzing piece by piece, we noticed how some elements have been replaced due to apparent deterioration, while in some other areas there are original pieces that are not featured in the wooden model. in 1932, the reconstruction phase of the attic, of some columns, of nearly the whole pediment, and of other areas of the arch, aimed to differentiate the new from the original, simplifying the details of mouldings, capitals and columns. this reconstructing strategy was very effective because it preserved the original parts, as shown by the laser data. parallel to the comparison for surfaces in the same reference system, we also compared the two surfaces considering the scale variation and applied the icp algorithm to the two geometries. this kind of comparison has been applied to the meshes of the facades using geomagic10 software. we also compared the points' grid by overlapping the two grids with surfer 9 software (fig.17). in this case, the data did not undergo procedures that could possibly introduce geometric errors. the outcomes of the two subsequent comparisons supported the geometric proximity of the two objects, even if the direct comparison of the surfaces raised some problems related to substantial gaps in the data, due to a lack of data in the maquette compared to the reconstruction. in the case we modified the mesh of the architectural structure, eliminating all the new parts that had no correspondence with the data of the comparison, in order to have the same surfaces. to find a comprehensive representation of the monument able to emphasize the differences between reconstructed and original parts, we adopted two different 3d modelling methods for the two parts. with the modelling software rhinoceros 4.0, we modelled the reconstructed parts, for example ________________________________________________________________________________ geoinformatics ctu fce 2011 344 pediment and attic, by extrusion and revolution, two traditional 3d cad modelling techniques. first, we drew the moulding profiles as well as the profiles of other architectonic details, and then we proceeded with the modelling of the surfaces. next, we selected, cut and eliminated from the facade's mesh the new areas. the remaining data corresponded to the mesh of the original part, generated starting from the cloud of points describing the historical parts. this representation of 3d models, generated with a double procedure, highlighted even more the simplified areas compared to the recovered ones (fig.18), and offered the possibility to interact directly on the digital product. figure 17. overlapping of the two grids in surfer and their subtraction in geomagic. figure 18. 
6. virtual reconstruction

an additional analysis concerned the digital reconstruction based on the tridimensional survey of the original pieces of the arch that were not used for its actual reconstruction. the literature showed evidence of a piece of cornice of the attic found in 1960 during some digging operations and currently displayed inside castelvecchio's garden. this finding shows that the top part of the monument was reconstructed incorrectly. the digital anastylosis consisted of the creation of a 3d model of the piece and of its positioning on the complete 3d model of the arch; its purpose is the reconstruction of the arch and the evaluation of the objects' compatibility. for the corner of the cornice, however, the method used has been more significant and illustrative. the attic has eight edges and the drawing of the piece that was found matches four of them. to insert and relocate the piece, we performed a roto-translation in space with the same scale. the data have the same scale because they belong to the architectural structure and were acquired using the same software, geomagic 10. we uploaded into the orientation module the mesh of the whole facade, as well as the 3d surface model of the attic's corner. we kept the facade steady in order to have the topographic support as the reference system, ensuring its vertical positioning. we identified some homologous points, even if this operation presented some challenges because the reconstructed part is quite simplified, while the piece that was found presents a rich moulding framework. then, we virtually positioned the piece in its original location. it is evident that its positioning is approximate and that the metric and historic controls were weak. however, this can be the first step toward the restoration of the monument. subsequently, the 3d data were imported into rhinoceros 4.0, where we integrated modelling and mesh, to relocate the piece in its hypothetical original position (fig. 19). once again, we want to stress the difference between the reconstructed ideal piece and the actual one that was recovered.

figure 19: arch reconstruction.

7. conclusions and perspectives

we can state that the choice of methodology and instruments permitted us to obtain analyzable and comparable tridimensional data. the software used also allowed and, in certain cases, simplified the modelling and comparison procedures. the choice to operate on the data grids, as well as the use of a 3d mesh representation, enabled us to work on the single surfaces, facade by facade, verifying the "vicinity" of the two objects not only by macro areas but also per single piece. the integration, in the 3d surface model, of modelled parts, corresponding to the areas that were reconstructed, with the mesh obtained from the laser data, corresponding to the original parts, allowed the emphasis on the current state of the arch, best illustrating how stylized the reconstructed part is. future perspectives might include the analysis of the whole arch and the use of 3d solid modelling, instead of modelling for surfaces, to operate block by block. in this case also, we would integrate the mesh, recreating a tridimensional database containing all the information needed to recognize and know each piece.
references

[1] adami a., guerra f., vernier p., 2007. "laser scanner and architectural accuracy test". in: proceedings of cipa 2007 xxi international symposium "anticipating the future of the cultural past", athens.

[2] adami a., gnesutta m., vernier p., 2010. dalla scansione laser al modello: il caso esemplare di san francesco della vigna. in: architettura delle facciate: le chiese di palladio a venezia. nuovi rilievi, storia, materiali. marsilio editore.

[3] balletti c., guerra f., vernier p., studnicka n., riegl j., orlandini s., 2004. practical comparative evaluation of an integrated hybrid sensor based on photogrammetry and laser scanning for architectural representation. in: isprs, international archives of photogrammetry and remote sensing, commission v, isprs xx congress, istanbul, turkey.

[4] bitelli g., tini m.a., vittuari l., 2000. close-range photogrammetry, virtual reality and their integration in archaeology. in: proceedings of the xixth congress of the international archives of photogrammetry and remote sensing, pp. 872-879, amsterdam.

[5] boehler w., 2003. comparing different laser scanners: features, resolution, accuracy. in: cism, international centre for mechanical sciences, udine.

[6] guidi g., beraldin j.a., 2004. acquisizione 3d e modellazione poligonale. dall'oggetto fisico al suo calco digitale. edizioni poli.design, milano.

genetic algorithm in the computation of the camera external orientation

rudolf urban, martin štroner
department of special geodesy, faculty of civil engineering, czech technical university in prague, thákurova 7, 166 00 prague 6, czech republic
rudolf.urban@fsv.cvut.cz

abstract

the article addresses the solution of the external orientation of the camera by means of a genetic algorithm which replaces complicated calculation models using the matrix inverse. the computation requires the knowledge of four control points in the spatial coordinate system and in the image coordinate system. the computation procedure fits computer-based solutions very well thanks to its simplicity.

keywords: photogrammetry, adjustment, algorithms, camera, observations, automation

1. introduction

genetic algorithms, simulating nature's own development as a subset of global optimization algorithms, allow simple computations of optimum values based on a selected evaluation standard. the price for this simplicity, on the other hand, is very high computational demands. in more complex computations using the least squares method, the convergent iterative computation of a set of non-linear equations, for example, requires the determination of sufficiently accurate approximate values. in numerous geodetic problems these may be determined with ease, e.g. in the area of geodetic network adjustment, but in other applications such as photogrammetry or laser scanning, and in geometric primitive fitting in particular, this is a rather complicated and frequently also unreliable option. together with their computational force supported by the latest computer technology, genetic algorithms theoretically offer algorithmically very simple methods allowing initial values to be found for subsequent optimizations using the least squares method. based on the knowledge of its mathematical nature, however, the whole problem must be simplified so that the solution time does not pose limitations on practical usability. the computation of the elements of the camera external orientation is a problem frequently solved in photogrammetry; see e.g. [1], [2], [3].
with the availability of known approximate values and a redundant number of measurements, this problem may be iteratively solved using the least squares method. the computation of approximate values has recently been addressed several times (see [4], [5]); here a very simple computational procedure using a genetic algorithm is described, which applies to the first part of the determination of the external orientation elements, namely the computation of the projection centre coordinates. the computational algorithm is complemented with a simple computation of the camera rotation angles.

2. genetic algorithms and their potential for use in geodetic computations

genetic algorithms belong to global optimization techniques; they are based on a heuristic procedure applying the principles of evolutionary biology to solving complex problems. evolutionary processes like inheritance, mutation, crossover and natural selection are simulated there.

2.1. principle of genetic algorithms

the principle of how a genetic algorithm works is that it gradually generates generations of different solutions to a given problem. as these solutions (populations) undergo a development, the solutions improve. the solution is traditionally represented by binary numbers, but other representations are also used, like fields, matrices etc. in the first generation, the population is composed of several solutions, usually created by random generation. in the transition into a new generation, an evaluation is computed for each solution of the population using the evaluation (so-called fitness) function, which expresses the quality of the given solution and represents natural selection. according to this evaluation, suitable solutions are selected to be reproduced (copied) into the next generation and further modified by means of mutation (random change in part of the solution) and crossover (exchange of parts of the solution between individual solutions). thus, a new generation is generated, and the procedure is iteratively repeated while the population gradually improves, i.e. the quality (optimality) of the solution improves. the direction of selection and optimization is given by the evaluation function. the principles and applications of genetic algorithms and evolutionary procedures in general may also be found e.g. in [6], [7] or [8]. the advantage of using these algorithms is the fact that the generation of the following solutions is a very simple random process using a generator of pseudorandom numbers; there is no need for solving optimization calculations, you just need to define the evaluation function.

2.2. general flowchart of genetic algorithms

the general process of the genetic algorithm's functioning may be described in the following points:

1. initiation: the zero population is generated, i.e. various random solutions are generated up to the whole population number.

2. start of iteration:
• selection of high-quality individuals with the best evaluation.
• by means of crossover, mutation and reproduction a new population is generated from the selected individuals (generation procedure; besides, another procedure is used where only one new solution is generated to replace the worst solution in the previous generation).
• the evaluation function for the new population is computed.

3. end of cycle: unless the end condition is fulfilled, the procedure continues again with point 2.
4. end of optimization: the solution with the best evaluation is the sought solution.

the end condition may be specified by the computation time, the evaluation quality of the best solution (provided it may be defined), the total number of cycles or the total number of generations.

2.3. advantages and disadvantages of genetic algorithms

among the principal advantages of genetic algorithms is the fact that no mathematical solution of the problem is necessary; knowledge of the evaluation function suffices. if correctly set up, the algorithms are resistant to finding only a local optimum and are very universal. the disadvantage is the problem of finding the "accurate" optimum solution; another limiting factor may be a large number of computations of the evaluation function (if it sets high demands for computing). yet another disadvantage is their excessive universality and the resulting extreme computational demands; for this reason hybrid algorithms, where knowledge of the solved problem is applied, are frequently used instead.

2.4. application to solutions of geodetic optimization problems

optimization problems in the field of geodesy and cartography are usually solved using the least squares method, i.e. the optimization problem

$$[pvv] = \mathbf{v}^T \cdot \mathbf{P} \cdot \mathbf{v} = \min,$$

where $v_i$ are corrections of the measured variables ($i = 1 \ldots n$), $p_i$ is the measurement weight, $\mathbf{v}$ is the correction vector and $\mathbf{P}$ is the weight matrix. for non-linear systems of equations, the iterative solution (for more detail see [9]) is known in the form

$$\mathrm{d}\mathbf{x} = (\mathbf{A}^T \cdot \mathbf{P} \cdot \mathbf{A})^{-1} \mathbf{A}^T \cdot \mathbf{P} \cdot \mathbf{l}', \quad (1)$$

where $\mathbf{A}$ is the jacobian matrix (design matrix) and $\mathbf{l}'$ is the vector of reduced measurements. the corrections are determined from the formula

$$\mathbf{v} = \mathbf{A} \cdot \mathrm{d}\mathbf{x} + \mathbf{l}'. \quad (2)$$

part of the solution is the inverse of the matrix $\mathbf{N} = \mathbf{A}^T \cdot \mathbf{P} \cdot \mathbf{A}$, where numerical problems occur during the solution if large numbers of measurements are solved (as discussed e.g. in [10] and applied in software in [11]). using the genetic algorithm no inversion is necessary, only corrections are computed, and, therefore, numerical problems are practically eliminated in this case. besides, the performance of the convergent iterative computation requires sufficiently accurate approximate values. in the case of not so extensive problems in common calculations in coordinate systems in geodesy, or in smaller purpose-built networks, this may be ensured by standard calculations of coordinates without any adjustment; in extensive solved systems like photogrammetric computations (see the solved problem) or in the area of 3d scanning, where e.g. the fitting of geometric primitives onto hundreds of thousands of points is solved, problems arise, on the contrary. provided they are appropriately applied, genetic algorithms may be one of the ways how to obtain these approximate values and successively perform the final optimization using standard methods.

another area of use is optimization according to a criterion which cannot be solved as simply as the least squares method. an example may be l1-minimization, used in geodesy mainly for detecting large measurement errors, i.e. the solution of a set of equations under the condition

$$[|v|] = \sum |v_i| = \min,$$

or minimization according to another norm selected by a mere change in the evaluation function. a fitting application of genetic algorithms that cannot be solved by any other means is, for example, the optimization of the geodetic network configuration (see [12]). it must, however, be pointed out that if a given problem lends itself to a satisfactory mathematical solution, the application of genetic algorithms is generally not beneficial in such a case, as it poses extreme demands for computing.
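as a small illustration of the least squares iteration in equations (1) and (2), the following sketch performs a few gauss-newton steps for a toy problem (estimating a planar point position from distance observations); the coordinates, observations and sign convention of the reduced measurements (observed minus computed) are assumptions made only for this example and are unrelated to the camera problem itself.

```python
import numpy as np

# toy problem: unknowns are the planar coordinates (x, y) of a point,
# observations are distances to four known points (invented values)
known = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
measured = np.array([70.75, 72.83, 72.87, 70.63])
P = np.eye(len(measured))              # equal weights
x = np.array([55.0, 55.0])             # approximate values

for _ in range(5):
    dist = np.linalg.norm(known - x, axis=1)        # computed distances
    A = (x - known) / dist[:, None]                 # jacobian of the distances w.r.t. (x, y)
    l = measured - dist                             # reduced measurements (observed - computed)
    dx = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)  # normal-equation step, cf. equation (1)
    x = x + dx

v = A @ dx - l                                      # corrections of the last step, cf. equation (2)
print(x, np.sqrt(v @ P @ v))
```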
a fitting application of genetic algorithms that cannot be solved by any other means is for example the optimization of the geodetic network configuration (see [12]). it must, however, be pointed out that if a given problem lends itself to a satisfactory mathematical solution, in this case geoinformatics fce ctu 9, 2012 7 urban, r., štroner, m.: genetic algorith in camera external orientation figure 1: illustrative schema of used coordinate systems (in photogrammetry is general used the coordinate system, which z axis is oriented to zenith and y is oriented allong the normal of the image plane). the application of genetic algorithms is generally not beneficial as it poses extreme demands for computing. 3. calculation procedure of the camera projection centre position the first part of the calculation of the external orientation elements is the determination of the position of the camera projection centre. the underlying idea of the calculation is a tetrahedron (fig. 1), which is delimited by the entrance pupil p and three control points a, b, c. the symbol h refers to the principal point of the image. then, respective direction vectors of position vectors (a, b, c), the lengths of position vectors from the entrance pupil (ra,rb,rc) and the apex angles of position vectors (α,β,γ) in the calculated triangle are displayed. the distance of the entrance pupil from the principal point of the image is the camera constant (denoted in further calculations by symbol f). after the third dimension is added into the image coordinate system, the direction vectors of position vectors become known and are defined by the formula a = (xa ya f)t , (3) b = (xb yb f)t , (4) c = (xc yc f)t , (5) where xi, yi(i = a,b,c) are point coordinates in the image. before any calculations are made, point coordinates in the image must be reduced removing the coordinates of the principal point of the image and further corrected for the effect of lens distortion. the direction vectors of position vectors allow calculating the values of apex geoinformatics fce ctu 9, 2012 8 urban, r., štroner, m.: genetic algorith in camera external orientation angles of the position vectors using the formulae: cos α = (at · b) (|a| · |b|) , (6) cos β = (bt ·c) (|b| · |c|) , (7) cos γ = (ct ·a) (|c|·|a|) . (8) 3.1. calculation of one solution the basic idea is the solution of a tetrahedron which in general leads to maximally two different solutions of the entrance pupil position in the respective half-plane. to be able to quite accurately determine which solution is correct, another point must be added into the calculation [13] thus three more equations must be solved. therefore, six apex angles which may be calculated among four control points enter the calculation. 3.2. application of a genetic algorithm in terms of classification, a hybrid genetic algorithm is used here. the position of the entrance pupil is sought quite randomly. a random vector of a selected length is generated from the previous position. the length of the vector is decreased based on the evaluation criterion so that the optimum speed and optimum computation accuracy may be achieved. in this case, the zero generation point may conveniently be determined by graphic means located into the correct half-plane and situated at the previously estimated camera distance from the object so that the computation will not take too long. the initial point and the random vector serve for the generation of a new point (new generation), which is checked against the evaluation criterion. 
if the new point meets the criterion with a higher quality than the previous generation, it is adopted as the current solution. the procedure continues with the generation of another random vector and everything is repeated. if the criterion is not of higher quality, the algorithm continues from the previous position. the length of the random vector is decreased, always after a selected number of generations which fail to meet the evaluation criterion. it is therefore appropriate to select a relatively large step at the start of the computation so that the computation converges faster. in this way, the algorithm gradually determines the correct solution of the entrance pupil position. for the algorithm to function correctly, it is necessary to appropriately select the number of unsuitable generations used for the modification of the vector's length: the greater the number, the higher the quality of the computation, but also the longer it takes. the above-described algorithm may be ranked in the hybrid group. due to the absence of a larger population and of crossover, the algorithm is highly sensitive to local minima; in this case, however, it may be used because there is only one minimum within the solved half-plane, which is thus of a global type, as can be seen in fig. 2 (the chart of contour lines was created for the algorithm testing data presented in tab. 1). the procedure is also called "hill climbing" and its description, including further alternatives less sensitive to local minima (e.g. "hill climbing" with random restarts), may be found in [8]. although the evaluation criterion checks the approximation quality in a simple way, it cannot be interpreted directly in terms of the quality of the entrance pupil positioning. therefore, the computation is terminated after the reduction of the random vector length below a certain level, which thus indirectly determines the interval within which the correct solution is found.

figure 2: global minimum of the evaluation criterion in the plane of the correct solution

3.3. evaluation criterion

the principal step of the genetic algorithm consists in the specification of the criterion which reveals whether a result is better or worse. for the computation of the external orientation, the criterion depends on the magnitudes of the apex angles of the position vectors. to be able to compare deviations from the correct solution, the angles calculated from the object coordinates must be subtracted from the correct angles derived from the image coordinates and the camera constant. these deviations have different signs and there are six of them in all. so that the evaluation criterion may be assessed by a single value, it is expedient to compare only one quantity; the resulting deviation may be characterized by the sum of the individual squared deviations in the angles (which corresponds to the solution using the least squares method). a criterion set up in this way decides whether the new position of the entrance pupil is of higher quality than the preceding one. the algorithm therefore always retains in its memory only two positions of the entrance pupil and two evaluation criteria, which it successively compares.
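the search of section 3.2 together with the criterion of section 3.3 can be sketched as follows. this is a hedged python illustration of the described procedure (random step, accept only improvements, halve the step after a fixed number of failures, stop once the step falls below a tolerance); it is not the authors' scilab implementation, and all names and default parameters are illustrative.

import numpy as np
from itertools import combinations

def apex_angles_from_point(P, obj_pts):
    """apex angles at point P subtended by every pair of control points."""
    dirs = np.asarray(obj_pts, dtype=float) - np.asarray(P, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.array([np.arccos(np.clip(dirs[i] @ dirs[j], -1.0, 1.0))
                     for i, j in combinations(range(len(obj_pts)), 2)])

def evaluation_criterion(P, obj_pts, target_angles):
    """sum of squared deviations between the apex angles seen from the
    candidate pupil position P and the angles from the image measurements."""
    return np.sum((apex_angles_from_point(P, obj_pts) - target_angles) ** 2)

def hill_climb_pupil(start, obj_pts, target_angles,
                     step=0.5, max_fail=15, tol=1e-15, rng=None):
    """'hill climbing' variant described in section 3.2: accept a random move
    only if the criterion improves; after max_fail consecutive failures halve
    the step; stop once the step falls below tol."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.asarray(start, dtype=float)
    best = evaluation_criterion(P, obj_pts, target_angles)
    fails = 0
    while step > tol:
        direction = rng.normal(size=3)
        candidate = P + step * direction / np.linalg.norm(direction)
        value = evaluation_criterion(candidate, obj_pts, target_angles)
        if value < best:            # higher-quality generation: adopt it
            P, best, fails = candidate, value, 0
        else:                       # keep the previous position
            fails += 1
            if fails >= max_fail:
                step, fails = step / 2.0, 0
    return P

the target angles would be the six apex angles obtained from the image measurements and the camera constant (e.g. by treating the vectors (x_i, y_i, f) as directions from a common origin), and the starting point corresponds to the graphically estimated zero-generation point in the correct half-plane.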
4. calculation of angles of rotation

only three identical points are necessary for the calculation of the angles of rotation. this calculation procedure is based on the publications by b. k. p. horn [15] and [16] and allows a simple determination without iteration.

figure 3: schematic representation of three points in the two coordinate systems (according to [15])

4.1. theoretical basis

a schematic graphic representation of the problem is given in fig. 3. the coordinates of three points in two coordinate systems (X_1, X_2, X_3 and X'_1, X'_2, X'_3) are known and the task is to find the three angles of rotation (or the orthonormal rotation matrix R). identical triples of points lend themselves to setting up two mutually equivalent triples of orthonormal vectors (unit, mutually perpendicular): a pair of points is used to calculate a vector which is normalized, then by means of the third point and gram-schmidt orthogonalization (described e.g. in [14]) a vector normal to the first one is obtained, and via a vector product a third vector is created. the only mutual relationship shared by these two triples of orthonormal vectors is a rotation in space, which may be simply determined. the calculation procedure of the triple of orthonormal vectors V_X for the points X_i is (for V_X' the procedure is identical, using X'_1, X'_2, X'_3):

v_1 = X_2 - X_1,  v_2 = X_3 - X_1, (9)
e_1 = v_1 / ||v_1||, where ||v_1|| = sqrt(v_1^T v_1), (10)
e_2 = o_2 / ||o_2||, where o_2 = v_2 - (v_2^T e_1) e_1, (11)
e_3 = e_1 x e_2, (12)
V_X = (e_1 e_2 e_3). (13)

since the following holds true

R V_X = V_X', (14)

it also holds for the calculation of R that

R V_X V_X^-1 = V_X' V_X^-1, (15)

hence

R E = V_X' V_X^-1, (16)

and hence

R = V_X' V_X^-1. (17)

since all matrices in formula (17) are orthogonal, it also holds for V_X^-1 that V_X^-1 = V_X^T, hence

R = V_X' V_X^T. (18)

figure 4: example of correct orientation of the coordinate systems corresponding to the formulas

4.2. identical triples of points

while calculating the angles of rotation, the correct turning of the coordinate systems is of key importance (fig. 3). further, the coordinates of the three control points in the image coordinate system must be recalculated into the space behind the entrance pupil. this recalculation may be performed using the entrance pupil P, the lengths of the position vectors of the control points (r_a, r_b, r_c) and the direction vectors of the position vectors (a, b, c), all modified so that they correspond to the correct turning of the coordinate systems. the modified direction vectors according to fig. 4 are

a = (x_a - x_0, -(y_a - y_0), -f)^T, (19)
b = (x_b - x_0, -(y_b - y_0), -f)^T, (20)
c = (x_c - x_0, -(y_c - y_0), -f)^T, (21)

where x_i, y_i are the image coordinates of the points and x_0, y_0 are the image coordinates of the principal point. the coordinates of the control points behind the pupil are obtained by multiplying the unit direction vector (marked with the index 0) by the length of the position vector and adding the result to the coordinates of the entrance pupil P. thus, the matrix definition of point A looks as follows:

(X_A^P, Y_A^P, Z_A^P)^T = (X_P, Y_P, Z_P)^T + r_a a^0. (22)

hence the input matrices of the two mutually rotated systems before orthogonalization are

X = [ X_A X_B X_C ; Y_A Y_B Y_C ; Z_A Z_B Z_C ],  X' = [ X_A^P X_B^P X_C^P ; Y_A^P Y_B^P Y_C^P ; Z_A^P Z_B^P Z_C^P ]. (23)
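a compact python sketch of formulas (9)-(23) follows, assuming the entrance pupil P and the position-vector lengths r_a, r_b, r_c are already known from the tetrahedron solution of section 3. the triad V_X is built from the object coordinates of the control points and V_X' from the points reconstructed behind the pupil; numpy stands in for the scilab environment used by the authors, and all names are illustrative.

import numpy as np

def points_behind_pupil(P, img_pts, principal_pt, f, r):
    """formulas (19)-(22): reconstruct the control points from the entrance
    pupil P, the modified unit direction vectors and the lengths r_a, r_b, r_c."""
    P = np.asarray(P, dtype=float)
    out = []
    for (x, y), ri in zip(img_pts, r):
        d = np.array([x - principal_pt[0], -(y - principal_pt[1]), -f])  # (19)-(21)
        out.append(P + ri * d / np.linalg.norm(d))                       # (22), unit vector
    return out

def orthonormal_triad(p1, p2, p3):
    """formulas (9)-(13): orthonormal triple built by gram-schmidt."""
    v1, v2 = p2 - p1, p3 - p1
    e1 = v1 / np.linalg.norm(v1)
    o2 = v2 - (v2 @ e1) * e1
    e2 = o2 / np.linalg.norm(o2)
    e3 = np.cross(e1, e2)
    return np.column_stack((e1, e2, e3))

def rotation_from_triples(obj_pts, pupil_pts):
    """formulas (14)-(18): R = V_X' V_X^T (both triads are orthogonal)."""
    Vx  = orthonormal_triad(*(np.asarray(p, dtype=float) for p in obj_pts))
    Vxp = orthonormal_triad(*(np.asarray(p, dtype=float) for p in pupil_pts))
    return Vxp @ Vx.T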
5. testing of the computational algorithm

the algorithm was practically tested on several different cases; the most easily explained one is presented here. four control points with known spatial coordinates were selected on a calibration field for the experiment (tab. 1). the apparatus used was the lumenera lu125c camera (camera constant f = 2445.8997, principal point in the image x_0 = 677.1816, y_0 = 504.3293). prior to the application of the computational algorithm the effects of radial and tangential distortion were removed from the set of image coordinates. after the computation of the complete external orientation the results were subjected to the iterative computation of the projection transformation to confirm whether the accuracy of the results would be sufficient for the iterative solution of the external orientation in bundle adjustment. the computations were performed on a desktop computer fitted with an intel pentium d processor (3.0 ghz) and 2 gb ram in the scilab 5.0.3 programme.

point no.   X [m]        Y [m]      Z [m]       x [pixel]   y [pixel]
1           5001.22710   98.67664   997.50504   551.11      895.69
2           5001.55797   99.00517   997.49921   1129.16     371.59
3           5001.08773   99.15847   997.48235   338.45      74.27
4           5001.55615   99.10514   996.99086   980.61      179.56

table 1: coordinates of control points

the testing of the computation was performed with the initial setup of the vector's length as 0.5 m, with the number of incorrect solutions set to 15 for the reduction of the vector's length to one half, and with the iteration cycle terminated once the vector's length fell below a tolerance of 1e-15. different starting points in the respective half-space were selected for testing. tab. 2 presents the deviations of the start from the correct solution in the individual coordinate axes (dx, dy, dz) and, further, for illustration, also the spatial distance from the correct solution, the number of iterations and the total time necessary for the computation of the entrance pupil position. the resulting position of the entrance pupil from the iteration solution was identical for all tested points (x = 5001.198 m, y = 99.139 m, z = 998.924 m). the resulting position of the entrance pupil from the projection transformation adjustment was (x = 5001.199 m, y = 99.138 m, z = 998.925 m). the rotation matrix was nearly identical due to the minimal differences in the position of the entrance pupil. hence the resulting rotation matrix is

R = [  0.9973281  -0.0332701  -0.0650372
       0.0429059   0.9873119   0.1528864
       0.0591255  -0.1552684   0.9861014 ]

tab. 2 clearly shows that the computation speed depends on the distance from the correct solution (and also on the number of iterations). however, even in the case of a considerably outlying start the computational algorithm reaches a time well below 1 second, which may be seen in the last tested point in the table.
time [s]   dx [m]     dy [m]     dz [m]     distance [m]   no. of iterations
0.238      -1.481     -2.491      0.631      2.966          489
0.141      -1.526     -2.410      2.629      3.879          288
0.189      -1.571     -2.329      4.626      5.413          381
0.153      -1.906     -0.539      0.542      2.054          306
0.126      -1.952     -0.458      2.540      3.236          255
0.135      -1.997     -0.377      4.538      4.972          279
0.191      -2.332      1.413      0.453      2.764          393
0.122      -2.377      1.494      2.451      3.727          223
0.162      -2.423      1.575      4.449      5.305          315
0.165       0.473     -2.064      0.658      2.217          315
0.199       0.428     -1.983      2.655      3.342          412
0.135       0.383     -1.902      4.653      5.041          280
0.134       0.047     -0.112      0.569      0.582          272
0.155       0.002     -0.031      2.567      2.567          321
0.461      -0.043      0.050      4.565      4.565          956
0.158      -0.379      1.840      0.480      1.939          324
0.133      -0.424      1.922      2.478      3.164          273
0.172      -0.469      2.003      4.476      4.926          356
0.425       2.427     -1.636      0.685      3.006          876
0.178       2.381     -1.555      2.682      3.910          367
0.131       2.337     -1.474      4.680      5.435          272
0.123       2.001      0.316      0.596      2.111          255
0.157       1.956      0.397      2.594      3.272          324
0.323       1.910      0.478      4.591      4.996          661
0.135       1.575      2.268      0.507      2.807          278
0.237       1.530      2.349      2.505      3.759          492
0.159       1.485      2.430      4.503      5.328          325
0.321      18.802     10.861     41.075     46.461          648
0.245       3.802     20.861     21.075     29.896          501
0.292      -1.199     -4.140     51.075     51.257          609
0.555     -21.198    -14.139    101.075    104.238         1135

table 2: results of testing random starting points

6. conclusion

the article presents a non-traditional method for the computation of approximate values of the elements of camera external orientation from four control points; the method is very simple and, at the same time, highly efficient when computer technology is used. it applies a genetic method in which there is no need for the calculation of a matrix inverse or any other complicated matrix operations. it is based solely on the repeated generation of a random vector whose length depends on the number of previous unsuccessful attempts. the method was tested on practical examples; the results are reliable and the time of their determination depends on the accuracy of the estimated initial position of the camera entrance pupil. the procedure converges from a practically arbitrarily chosen initial position.

acknowledgements

the article was written with support from the grant of the czech technical university in prague no. sgs12/051/ohk1/1t/11 "optimization of 3d data acquisition and processing for the needs of engineering geodesy".

references

[1] grunert, j. a.: das pothenotische problem in erweiterter gestalt nebst über seine anwendungen in der geodäsie. grunerts archiv für mathematik und physik, band 1, 1841, pp. 238-248. (german)
[2] haralick, r., lee, c., ottenberg, k., nolle, m.: review and analysis of solutions of the three point perspective pose estimation problem. international journal of computer vision, 13, 3, 331-356, 1994.
[3] lepetit, v., moreno-noguer, f., fua, p.: epnp: an accurate o(n) solution to the pnp problem. international journal of computer vision 81(2): 155-166, 2009.
[4] urban, r.: solution of the camera external orientation from four control points. proceedings of the juniorstav 2011 conference. brno: vysoké učení technické v brně, fakulta stavební, part 1, 376 p., 2011. isbn 978-80-214-4232-0. (in czech)
[5] koska, b., pospíšil, j., obr, v.: eliminations of some defects of the digital cameras used in the laser scanning systems. in: ingeo 2008, bratislava, 2008. isbn 978-80-227-2971-0.
[6] holland, j. h.: adaptation in natural and artificial systems. university of michigan press, ann arbor, 1975.
[7] mitchell, m.: an introduction to genetic algorithms, mit press, cambridge, ma, 1996. [8] weise, t.: global optimization algorithms theory and application. electronic monograph, on-line available at http://www.it-weise.de/projects/book.pdf, 20.2.2012. geoinformatics fce ctu 9, 2012 15 urban, r., štroner, m.: genetic algorith in camera external orientation [9] böhm, j. radouch, v. hampacher, m.: theory of errors and adjustment calculus. geodetický a kartografický podnik praha, 2nd edition, prague, 1990. isbn 80-7011-0562. (in czech) [10] čepek, a. pytel, j.: a note on numerical solutions of least squares adjustment in gnu project gama in: interfacing geostatistics and gis. berlin: springer-verlag, 2009, pp. 179-193. isbn 978-3-540-33235-0. [11] čepek, a.: program gnu gama. http://www.gnu.org/software/gama/gama.cs.html. 14.4.2012. [12] berné, j. l. baselga, s.: first-order design of geodetic networks using the simulated annealing method. journal of geodesy, 78, springer-verlag, 2004. [13] kraus, k.: photogrammetry volume 2 advanced methods and applications. dümmler, bonn, germany, 4th edition, 1997. isbn 3-427-78694-3. [14] štroner, m. pospíšil, j.: terrestrial scanning systems. 1st edition. praha: česká technika nakladatelství čvut, 2008. 187 p. isbn 978-80-01-04141-3. (in czech) [15] horn, b. k. p.: closed-form solution of absolute orientation using unit quaternions. journal of the optical society a, 4, 629–642, 1987. [16] horn, b. k. p. hilden, h. m. negahdaripour, s.: closed-form solution of absolute orientation using orthonormal matrices. journal of the optical society of america a. vol. 5 issue 7, pp.1127-1135, 1988. geoinformatics fce ctu 9, 2012 16 ___________________________________________________________________________________________________________ geoinformatics ctu fce 241 the virtual 3d reconstruction of the east pediment of the temple of zeus at olympia – presentation of an interactive cd-rom andrás patay-horváth university eötvös loránd, institute for ancient history archaeological institute of the hungarian academy of sciences, budapest, hungary pathorv@gmail.com keywords: archaeology, ancient greek marble sculpture, 3d scanning, virtual modeling abstract: the paper gives an overview of a two-years project concerning a major monument of ancient greek art and presents the interactive, bilingual (english/hungarian) cd-rom, which is intended to summarize and visualize its final results. the presented project approaches a century-old controversy in a new way by producing a virtual 3d reconstruction of a monumental marble group. digital models of the statues were produced by scanning the original fragments and by reconstructing them virtually. the virtual model of the pediment surrounding the sculptures was prepared on the basis of the latest architectural studies and afterwards the reconstructed models were inserted in this frame, in order to test the technical feasibility and aesthetic effects the four possible arrangements. the resulting models enable easy and very instructive experimentation, which would be otherwise impossible with the originals and/or very expensive and not very practicable with traditional tools (e.g. real-size plaster models). the complete model can effectively be used to verify the results of earlier or more recent reconstructions presented only in simple drawings. in addition, the 3d models of the individual fragments can be used for further research and for visualization. 
the documentary cd-rom presenting the full background, the methods and the conclusions of the project contains beside a comprehensive text various kinds of supporting documents (images, 3d models, papers, broadcasts, audiovisual material). it is addressed to a mixed audience: a picture gallery, a short documentary movie some other attachments including a selected bibliography is intended for the general public, but scholarly publications, presentations on related problems are also included for specialists interested in certain details. 1. introduction 1.1 the subject the temple of zeus at olympia was built in the first half of the 5th century b.c. (ca. 475–455). its sculptural decoration consists of two pediments and twelve metopes. given the large size of the building itself, the sculptures were all well over life-size and were made of white parian marble. most of them are quite well preserved and are depicted in practically every handbook on greek art or on ancient art in general. the sculptures of the temple in general and the fragments of the east pediment (figure 1) in particular have been thoroughly studied since their discovery in the 1řř0‟s, but they still pose some important questions, as indicated by the growing number of monographs and scholarly articles related to them [1, 2, 3, 4, 5, 6, 7]. the most recent debate has started with a series of publications by the author [8, 9, 10, 11] and concerns the interpretation of the east pediment, which involves the problematic issue of the correct reconstruction of this group as well. figure 1: fragments of the east pediment, as displayed in the archaeological museum of olympia today ___________________________________________________________________________________________________________ geoinformatics ctu fce 242 1.2 the problem the arrangement of the five central figures of the east pediment has been the subject of scholarly debates since the discovery of the fragments more than a century ago [5, 11]. the basic problem is that the fragments themselves can be arranged in four substantially different ways and there are no obvious clues for choosing the most probable one. there is a fairly detailed description of the group by pausanias, who saw it in the 2nd cent. ad, but his text (description of greece, book v, ch. 10, 6-7) is not conclusive regarding the precise arrangement of the figures (he does not specify how to understand his indications „to the left” and „to the right” of the central figure). the find places are not unequivocal either, since the pieces were scattered around the temple by an earthquake in the 6th cent. ad and the fragments were subsequently reused in medieval buildings. in sum, there are four different arrangements, all of which have already been advocated by certain scholars for various aesthetic, technical and other considerations. most often the reconstructions are presented in simple drawings, ignoring the three-dimensional form of the statues (figure 2). figure 2. the central part of the pediment enlarged. schematic reconstruction drawings showing every conceivable arrangement of the five central figures. different colours highlight the differences of the four versions. after herrmann 1972. 1.3 brief history of research since the original fragments are insufficient to answer the question and their enormous size and weight make experimentation practically impossible, scholars had to approach the problem in a different way. 
at the end of the 19th century, plaster models of the statues were produced first on a reduced scale (1:10), then on the act ual scale (1:1) and lost body parts, arms, etc. were reconstructed as well. experimenting with the plaster models for several years, g. treu the archaeologist, who published the sculptures of olympia, claimed in 1897 that one of the four conceivable arrangements (open "a": k – g – h – i – f) is physically impossible, because the left hand of figure k and the spear in the right hand of g do not fit but run across each other in the limited space [1]. to support this rather strong argument, treu added that with the help of the plaster models, anyone can verify his statement. indeed, during the following decades, several archaeologists exploited the possibility and experimented with the life-size models: they concluded that the reconstruction proposed by treu had to be modified at some major points, yet none of them advocated the option excluded by him [12, 13]. the large plaster models (kept in dresden) were not used for experimentation after the world war ii; in fact their sheer existence fell into oblivion. (it is a something of a miracle that they survived the notorious demolition caused by the bombings of the city.) most scholars used either the reduced models or just simple drawings to propose new reconstructions. besides a great number of studies, a complete monograph was also published on the east pediment in 1970, but no-one was able to present a fully satisfactory and convincing reconstruction. it is open closed "a" "b" "a" "b" ___________________________________________________________________________________________________________ geoinformatics ctu fce 243 characteristic of the situation that a pair of renowned english-greek authors presented two completely different reconstructions side by side in the same volume on the sculpture of the temple [2]. there was a major methodological problem as well. in general, scholars were accustomed to discuss the reconstruction and the interpretation together and the reconstruction was normally adapted to the interpretation, which is logically the wrong way, of course; evidence, which could be used to establish the correct reconstruction independently from the interpretation, was usually neglected. after a while it seemed that all conceivable arguments have been formulated and no approach proved to be entirely viable, thus archaeologists grew tired of a seemingly unproductive debate and gradually agreed (during the 1970s and 1980s) on a reconstruction, which was proposed by a few authoritative scholars supporting their notion by some theoretical considerations of supposed universal validity [3,5,6]. thus an absurd situation emerged: today the most widely accepted reconstruction (figure 3) is precisely the one, which was deemed technically impossible by treu. obviously, this would not present a problem, if his results had been thoroughly tested and clearly refuted, i.e. if anyone had showed that treu had experimented with ill-restored models or had come to wrong conclusions for some other reason. instead, everyone (with honorable exceptions) has ignored his arguments and his results. apparently nobody realized that the best evidence for the benefit of experimenting with life-size models is provided by g. treu himself, who had advocated the arrangement widely accepted today, while he only had the miniature models at his disposal, but later his experiences with the life-size models made him change his mind [1]. figure 3. 
the most commonly accepted reconstruction (open arrangement "a") of the pediment (after herrmann 1972 fig. 95) 2. the project in order to avoid the methodological pitfalls of previous approaches, the present project focused exclusively on the problem of the reconstruction, and did not build upon sources, results, and hypotheses concerning the interpretation of the pediment. it relies exclusively on the following types of evidence, which are totally independent from the interpretation: (1) the size of the sculptures and the elaboration of certain details, which provide a clue about their position in the pediment (optical corrections); (2) the architectural framework of the group (primary context); (3) the position of the excavated fragments at the site (secondary context). the directions indicated by pausanias (which are also independent of the interpretation) are not discussed here, because this is mainly a philological problem and has already been treated by the author in detail elsewhere [11]. the basic idea of the project consisted in the assumption that 3d scanning and modeling might solve the problem of the arrangement of the central figures of the east pediment of the temple of zeus at olympia. instead of the expensive and troublesome experimentation with plaster casts and models, highly accurate virtual 3d models of the statues can be produced by scanning the extant fragments in 3d and then modeling the missing parts virtually. inserted in the virtual model of the pediment, these 3d models can be easily used to test the technical feasibility and aesthetic effects of the different reconstructions. this seemingly simple notion was not easy to implement. high resolution 3d scanning can be readily used to create an accurate, undistorted documentation of geometric shapes and surfaces of relatively small size, but the scanning of huge marble sculptures such as the fragments of the olympian pediments is an especially complicated task and presented a great technical challenge. practical difficulties of various kinds were experienced during the data capture [14, 15] and the virtual modeling was also complicated. several software application had to be tested for the effective virtual reconstruction, thus active cooperation with the software developers to find the most appropriate solutions was inevitable. the plan was, however, carried out successfully and the virtual 3d reconstruction of the entire pediment was completed by january 2011. (figure 4) since then, the completed model can effectively be used for experimentation with the different arrangements and yielded unexpected results, which were already presented at an international level.[16] further possibilities to exploit the scanned data and the models (both for scholarly and for educational purposes) are plentiful. the 3d models of the individual fragments can be used for e.g. to visualize the reconstruction of the lost metal attachments of the statues, or they can be inserted in a virtual 3d model of the entire temple. ___________________________________________________________________________________________________________ geoinformatics ctu fce 244 3. the interactive cd-rom (isbn 978-963-284-196-0) 3.1 objectives during the course of the project reports were regularly presented on various meetings and international congresses and the results were published in due course [14, 15, 16], but all these publications (both digital and printed media) were restricted to 2d format and did not enable visualization in 3d. 
an appropriate documentation in the present case can, however, be conceived only in 3d and the most convenient solution seemed to be the publication of an interactive, multimedia cd-rom. our goal was to present the 3d models in a fairly good resolution and in a way, which enables the user to manipulate (to rotate, to zoom, to move) them in a relatively easy and uncomplicated fashion, without the need to purchase costly software products (and to learn, how to use them). at the same time, to preserve intellectual property rights, we did not want to disclose the original 3d data captured or created during the project. (they can be obtained on request – mainly for scientific purposes with no commercial implications – from the author, if both the german archaeological institute and the greek authorities agree.). since the project is a multidisciplinary one making use of the latest technological innovations and concentrating on a very specific and complex archaeological problem, it seemed to be reasonable to envisage a mixed audience consisting of both classical archaeologists / students of art history and computer scientists / experts in multimedia visualization. the inclusion of at least some pieces of basic information for both groups was deemed to be essential. because the monument investigated during the project, the temple of zeus and its sculptures are very well-known and famous pieces of the european cultural heritage (the site itself belonging to the unesco world heritage), it was intended to present the project and the models at different levels, not only for specialists, but also for the interested general public. figure 4. the new virtual reconstruction (closed arrangement "a") of the complete pediment 3.2 structure and content our aim was to create a clear and logical structure enabling easy orientation and navigation for every interested party. we chose therefore a format, which combines the appearance of a traditional printed publication with the extended functions of a website. by inserting the cd-rom into the computer (pc or mac), the user is automatically confronted with a screen, which functions like an ordinary website with an animated flash intro and a dynamic, multi-level menu (table of contents) on the left. the content itself is structured in fact like that of a book and the appearance resembles that of a printed book as well (all pages numbered consecutively and having clearly defined dimensions and a constant layout fitting the screen). the pages cannot be scrolled down, but there are arrows on the left and on the right of each, to turn over to the following or to the previous one. in addition there is a navigation bar on top of each page, directly below the title. by clicking on this, a complete scrollable list of all pages (with their individual titles) appears on the screen and the user can easily move to any other page, he is interested in. (figure 5). the text contains links to attached documents of various kinds (e.g. publications in pdf, reports in mp3 and avi format) and to other pages of the book guiding or informing the user, like cross-references and footnotes of a traditional book. images and 3d models displayed on the pages can be enlarged and viewed in a separate window by clicking on them. in order to ensure wide and easy usability, 3d models were included in 3d pdf format. 
this enables the user to observe the models from any ___________________________________________________________________________________________________________ geoinformatics ctu fce 245 point of view and to enlarge any part of them, but the original 3d data sets are not disclosed [17]. the fragments of each figure have been generally designated by alphabetic letters since their original publication by g. treu in 1897 [1] and precisely because their arrangement in the pediment is disputed, they were arranged in alphabetical order, one figure per page. navigation between them is facilitated for the non-specialists by a page show miniature icons of the models and the commonly used designations of the figures, both functioning as a direct link to the page, where the models of that particular figure are displayed. on these pages, the model on the left shows the surface of the preserved torso as recorded by the 3d scanner, the one in the centre displays a closed digital model of the piece, whereas each one on the right presents the whole figure as completed during the project, the original parts displayed in grey, the completed ones in pale blue. (figure 5) textures taken from the present state of the fragments were not applied to the models, because they are irrelevant for the project and because they are generally misleading, since ancient marbles were originally colored in general, and in this case practically every trace of polychromy has completely disappeared [18]. figure 5. two pages of the cd-rom illustrating its main features (structure, navigation, 3d models of individual figures) ___________________________________________________________________________________________________________ geoinformatics ctu fce 246 the four different virtual 3d reconstructions of the central part of the pediment are displayed in a similar way (the original and the completed parts differentiated by the same colors and with a navigation aid showing all variants side by side). two pages are devoted to every single arrangement showing the model from three different but constant viewpoints (all of them on the main axis of the pediment): 1. “museum view” (viewer standing approximately on the same level as the statues); 2. “ancient view” (viewer standing approximately on the ancient ground level before the temple); 3. “aerial view” (from above, pediment frame removed from above the statues). in addition, by clicking on the museum view, each possible arrangement of the central group can be viewed and manipulated in 3d pdf format. with the help of these models, everyone can decide which option seems most or least satisfying technically and aesthetically. the most probable reconstruction of the entire pediment (according to the author) is also included and can be studied in 3d pdf. figure 6. two pages of the cd-rom illustrating the presentation of the central group ___________________________________________________________________________________________________________ geoinformatics ctu fce 247 texts, presentations and audio-recordings of lectures, interviews of various genres are displayed in unaltered form (each one of them in the original language, i.e. english, german, hungarian or french). the differences are due to the various types of audiences (specialists or general public) and reflect at the same time the progress of the research. published and forthcoming manuscripts of the author are also included in the appropriate sections. 
numerous photographs of each figure are also added in the gallery section and may thus be compared with the 3d models. the aesthetic value of these images cannot be denied, but at the same time, they clearly show the limitations of this kind of documentation. 3.3 comparison with similar projects there are two distinct groups of projects, which invite comparison with the present one. (1) during the last decade, several virtual 3d reconstructions of the sanctuary and of the temple of zeus have been produced. these recreations (powerhouse museum, sydney 2000 and foundation of the hellenic world, athens 2004) were in fact motivated by the growing interest in the olympic games and they were thus fundamentally different from the present project regarding their aims, methods and results as well. the attachments in the annex section are intended to give a quick overview of them. (2) there were, on the other hand, a few notable projects involving 3d scanning and visualization of ancient sculpture, which can be more readily compared with the present one, although they were concerned with other monuments. these projects are mentioned and illustrated in the introduction of the cd-rom, because they had a decisive impact on the present project. the most recent one was the trier constantine project (arctron ltd., 2007), which involved both 3d scanning and virtual 3d reconstruction [19] and thus provided the basic idea for the author. the earlier one, (“metopes of selinunte” by siba, lecce – nrc, ottawa, 2004), which involved only the scanning and visualization of greek sculpture (but actually of the sculptural decoration of a monumental greek temple, like the one at olympia), served as a model for the cd-rom. [20] despite the similarities of all these projects, the cd/dvd presentations of them became very different in many respects. the constantine project was advertized only on a dvd by a 12-minutes movie illustrating the workflow and containing some very impressive 3d renderings and animations. the production of such documentation was beyond the means of the present project and would also have been insufficient to convey its results appropriately. the selinunte cd used macromedia director and contains almost exclusively audiovisual material (whereas in our case the material was mainly presented in written form), but its basic structure could be adapted. our renderings and animations are (mainly for financial reasons) clearly less elaborated and the design of the cd is much less sophisticated than the “metopes of selinunte”, but perhaps the structure is clearer and the navigation easier. the main difference and the progress can be observed in the rendering of the 3d models, since the 3d pdf format enables a manipulation practically free of any constraints (as opposed to the quick time viewer used on selinunte cd). the other differences derive mainly from the different aims of the two projects: the selinunte cd focuses on technology using the archaeological material as an example without discussing it in detail, whereas the cd presented here focuses on an archaeological problem using 3d scanning technology as a tool to solve it. 4. conclusions the complete virtual 3d reconstruction of the composition leads to the conclusion that the reconstruction, which is most widely accepted today (open “a”), is technically the most difficult to realize and that both open arrangements would be feasible only if we ignored a general pictorial convention of ancient greek art. 
still, it is important to emphasize that the virtual reconstruction does not enable us to establish the right arrangement, i.e. the one actually realized in antiquity, but only to exclude (with a high degree of probability) two of the four options. however, considering the uncertainties experienced so far, this result can be regarded as a great progress. though the remaining two closed arrangements are possible both technically and iconographically, one can observe, that every piece of evidence, which is independent from the interpretation actually point to type “a”, which can be considered therefore as the most probable reconstruction. the project reached therefore its major goal and contributed significantly to a debate, which engaged archaeological research for more than a century. it demonstrated at the same time, that 3d scanning can be used not merely for documentation (as it is most frequently employed), but for effective research purposes as well. 5. acknowledgements the scanning campaign was carried out with the permission of the 7th ephorate of prehistoric and classical antiquities in greece and in close collaboration with the german archaeological institute at athens, conducting the excavations on the site for more than 125 years. special thanks are due to g. hatzi (head of the ephorate at olympia) and r. senff (archaeological supervisor of the olympia excavations). financial support for the project was provided by the hungarian national research fund (otka ref. no. nnf 85614) and the jános bolyai scholarship offered by the hungarian academy of sciences. ___________________________________________________________________________________________________________ geoinformatics ctu fce 248 6. references [1] treu, g.: olympia iii. bildwerke aus stein und thon. berlin, 1897. [2] ashmole, b. – yalouris, n.: olympia. the sculptures of the temple of zeus. london: phaidon, 1967. [3] simon, e.: zu den giebeln des zeustempels von olympia, mitteilungen des deutschen archäologischen instituts, athenische abteilung 83 (1968), 147-167. [4] säflund, m. l.: the east pediment of the temple of zeus at olympia. a reconstruction and interpretation of its composition. göteborg, 1ř70. [5] herrmann, h.-v. (ed.): die olympia skulpturen. darmstadt: wissenschaftliche buchgesellschaft, 1987. [6] kyrieleis, h.: zeus and pelops in the east pediment of the temple of zeus at olympia , in: buitron-oliver, d. (ed.) the interpretation of architectural sculpture in greece and rome. washington: national gallery of art, 1997, 12-27. [7] rehak, p. – younger, j. g.: technical observations on the sculptures from the temple of zeus at olympia , hesperia 78 (2009), 41-105. [8] patay-horváth, a.: pausanias und der ostgiebel des zeustempels von olympia , acta antiqua acad. hung. 44 (2004), 21-33. [9] patay-horváth, a.: die frisur der weiblichen protagonisten im ostgiebel des zeustempels von olympia , in: ganschow, th., steinhart, m. (eds.): otium. festschrift für volker michael strocka. remshalden: greiner, 2005, 275283. [10] patay-horváth, a.: the armor of pelops, in: mattusch, c. c., donohue, a. a. , brauer, a. (eds.): common ground. archaeology, art, science and humanities. proceedings of the xvith inter national congress of classical archaeology boston. oxford: oxbow, 2006, 424-427. [11] patay-horváth, a.: zur rekonstruktion und interpretation des ostgiebels des zeustempels von olympia , mitteilungen des deutschen archäologischen instituts, athenische abteilung 122 (2008), 161-206. 
[12] studniczka, f.: die ostgiebelgruppe vom zeustempel in olympia, abhandlungen der sächsischen akademie der wissenschaft, phil.-hist. klasse 37 (1923), 3-36. [13] bulle, h.: der ostgiebel des zeustempels zu olympia, jahrbuch des deutschen archäologischen instituts 54 (1939), 137–218. [14] patay-horváth, a.: virtual 3d reconstruction of the east pediment of the temple of zeus at olympia – a preliminary report, archeometriai műhely / archaeometry workshop 7/1 (2010), 1ř-26. [15] patay-horváth, a.: virtual 3d reconstruction of the east pediment of the temple of zeus at olympia – a preliminary report, proceedings of the 14th international congress “cultural heritage and new technologies” 200ř, wien, 2011, 653-658. [16] patay-horváth, a.: the complete virtual 3d reconstruction of the east pediment of the temple of zeus at olympia, http://www.isprs.org/proceedings/xxxviii/5-w16/pdf/patay.pdf, 2011-05-30. [17] pletinckx, d.: europeana and 3d, http://www.isprs.org/proceedings/xxxviii/5-w16/pdf/pletinckx.pdf, 2011-0530. [18] brinkmann, v. – primavesi, o.: die polychromie der archaischen und frühklassischen skulptur, münchen: biering und brinkmann, 2003. [19] parisi presicce, c.: konstantin als iuppiter, in: demandt, a. – engemann. j. (eds.): konstantin der grosse. imperator caesar flavius constantinus., darmstadt, 2007, 117-131. [20] beraldin, j.-a. (et al.): the metopes of selinunte: presentation of the interactive cd-rom, xxi international cipa symposium, http://cipa.icomos.org/text%20files/athens/fp025a.pdf, 2011-05-30. http://www.isprs.org/proceedings/xxxviii/5-w16/pdf/patay.pdf http://www.isprs.org/proceedings/xxxviii/5-w16/pdf/pletinckx.pdf http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=292033 http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=759918 http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=759918 http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=759918 http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=759918 http://www.db.dyabola.de/dya/dya_srv2.dll?07&dir=ktscwviq&recno=759918 http://cipa.icomos.org/text%20files/athens/fp025a.pdf ________________________________________________________________________________ geoinformatics ctu fce 2011 125 a semiautomatic large-scale detection of simple geometric primitives for detecting structural defects from range-based information r.martínez1, f.j. delgado1, a. hurtado2, j. martínez3 and j. finat4 1: mobivap r&d group, scientific park, university of valladolid, 47011 valladolid, spain, {ruben.martinez.garcia, franciscojavier.delgado}@uva.es 2: metaemotion, scientific park, r&d building, 47011 valladolid, spain, ahurtado@metaemotion.es 3: lab of architectural photogrammetry, ets arquitectura, university of valladolid, 47014 valladolid, lfa@ega.uva.es 4: dept. of algebra and geometry, ets ingeniería informática, university of valladolid, 47011 valladolid, jfinat@agt.uva.es keywords: architectural surveying, semi-automatic recognition, defects detection, intervention assessment. abstract: buildings in cultural heritage environments exhibit some common structural defects in elements which can be recognized by their differences with respect to the ideal geometric model. the global approach consists of detecting misalignments between elements corresponding to sections perpendicular to an axis, e.g. the local approach consists of detecting lack of verticality or meaningful differences (facades or internal walls) in curved elements with typical components (apses or vaults, e.g.) 
appearing in indoor environments. geometric aspects concern to the basic model which supports successive layers corresponding to materials analysis and mechanical structural behaviour. a common strategy for detecting simple shapes consists of constructing maps of normal which can be extracted by an appropriate sampling of unit normal vectors linked to a points cloud. the most difficult issue concerns to the sampling process. a profusion of decorative details or even the small variations corresponding to small columns which are prolonging the nerves of vaults generate a dispersion of data which can be solved in a manual way by removing notrelevant zones for structural analysis. this method can be appropriate for small churches with a low number of vaults, but it appears as tedious when we are trying to analyse a large cathedral or an urban district. to tackle this problem different strategies for sampling information are designed, where some of them involving geometric aspects have been implemented. we illustrate our approach with several examples concerning to outdoor urban districts and indoor structural elements which display different kinds of pathologies. 1. introduction there exists a large diversity of problems in cultural heritage buildings which justify their maintenance or rehabilitation. they concern to unexpected behaviour of structural elements related with different kinds of mechanical efforts such as compression, traction and torsion between vertical elements, and material deterioration. the heterogeneity inside of walls, humidity or deterioration in materials can generate structural defects, but they can be detected by direct visual inspection or, alternately, by using acoustic devices for identifying irregular or heterogeneous internal composition of walls. in a similar way another affections can be identified such as geological issues (landslides, earthquakes, e.g.), fluencies or even dynamical actions (forces and moments arising from vibrations, e.g.). all of them are meaningful and involve the whole building structure but this work is restricted to quasi-static mechanical efforts, while an integral restoration must have into account the existing materials. three-dimensional modelling is crucial for supporting all information concerning the building. in our case, a general 3d representation is created from range data which is captured by means of different kind of laser scan devices and matched with standard software tools. alternately, 3d models can be generated from a collection of images with a good performance and less cost. a first goal concerning to cultural heritage documentation is to detect any kind of displacements or deformations on the fabric w.r.t. the expected shape, without intending to explain their causes from mechanical or materials composition. a second goal concerns to the insertion of such pathologies inside an information system which is overlaid to a 3d model of the whole building. a third goal would be focused towards the development of a management system in charge of monitoring and assessing possible interventions focused to prevent failures or possible crashes. an intelligent agent in ________________________________________________________________________________ geoinformatics ctu fce 2011 126 charge of assisting management would be on the last step of the system for supporting experts in the take of decisions. 
we only focus in the documentation and information system which are based on the common reference given by the 3d model generated from image and range information. the geometric inputs given by a dense point cloud with xyz coordinates allow superimposing different kinds of geometric primitives such as planes, spheres, cylinders, cones, torus or wired structures. the semi-automatic identification of structural elements (columns, walls, vaults, e.g.) is performed from an interactive application which allows clipping and evaluation w.r.t. ideal shapes. the ideal character of primitives is very useful for a global understanding of the static behaviour of the whole building, but it is necessary to complement it with an analysis or evaluation of the current fabric status. this approach is related to traditional reverse engineering or surface compression methods [1] which propose clustering techniques for automatic surface extraction without measuring the distance w.r.t. ideal shapes. due to the lack of information of occluded parts and the irregular distribution of captured data, different densities can be incompatible with traditional tools for estimating the current state of the building (based in finite element method). it is necessary to work at different levels of detail (lod) and to perform different kinds of sampling [2]. we have developed a software tool allowing the automatic extraction of meaningful elements depending on several importance criteria. our approach for automatic sampling is based on ransac methodologies (random sample consensus) [3] for generating and extracting valid samples for further propagating and validating hypotheses. the extension includes aspects involving impsac (importance sampling consensus) [4] and prosac (progressive sample consensus) strategies [5]. impsac is a natural extension of ransac approach which is well-known in computer vision [6, 7]. in this work we have developed an impsac approach applied to geometry, where importance functions for selecting samples are linked to geometric properties (depth for points, relative orientation for triangles) and radiometric parameters (homogeneous colour distributions for materials). prosac introduces more advanced strategies for making compatible the information to different lod for curved elements. indeed, even after a manual selection of critical zones, the presence of columns or nerves in vaults gives a lot of outliers. hence, a good selection of samples in a low-resolution level improves the behaviour of incidence properties of the normal vectors used for measuring model similarity. the rest of the paper is structured as follows. section 2 introduces the traditional approach for structural analysis and a classification of the different types of structural defects. section 3 explains how to achieve the adjustment of range data to a geometric primitive represented only by a parameterized distribution of its perfect shape. section 4 extends the previous method for achieving a smart sampling of the points cloud, trying to get a density of points directly proportional to the complexity of the represented object shape. finally, section 5 concludes the paper and gives some clues for future improvements. 2. common framework for structural analysis a global 3d geometric model of cultural heritage building can be obtained by merging dense clouds of points arising from laser scan devices taken from different location. 
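before describing the structural-analysis framework further, a hedged python sketch of the ransac-style selection with importance functions introduced above may help: a dominant plane is extracted while the minimal samples are drawn from a non-uniform distribution (e.g. weights inversely proportional to depth or to the incidence angle of the laser return). this is a simplified stand-in, not the group's implementation, and all names, thresholds and iteration counts are illustrative.

import numpy as np

def fit_plane(pts):
    """least-squares plane through >= 3 points: unit normal and centroid."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

def ransac_plane(points, weights=None, iters=500, threshold=0.02, rng=None):
    """ransac-style dominant-plane detection; `weights` is an optional
    importance function over the points (non-negative, e.g. down-weighting
    far or oblique returns), normalised to a sampling distribution."""
    rng = np.random.default_rng() if rng is None else rng
    points = np.asarray(points, dtype=float)
    p = None if weights is None else np.asarray(weights, float) / np.sum(weights)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), size=3, replace=False, p=p)
        normal, centroid = fit_plane(points[idx])
        dist = np.abs((points - centroid) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(points[best_inliers]), best_inliers

replacing the uniform draw by such an importance distribution is, in essence, the difference between plain ransac and the impsac-style selection discussed above.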
uvacad [8] is a specific software solution for merging range and radiometric information from different sources. dense information is needed for visualization but it is not useful for a structural computation viewpoint. since structural analysis uses a sharp simplification based in mechanisms of bars and joints, it is necessary to simplify the geometry in a collection of lines associated to longitudinal and transversal sections. indeed, structural aspects are usually computed in terms of finite element methods (fem), whereas materials analysis requires a large set of techniques, using at most five thousand control points and bars connecting them. despite of this fact, the bad conservation state makes sometimes very difficult to implement an effective intervention even when structural defects are obvious. figure 1: 1) lack of verticality in walls of chartres of aniago (valladolid, spain). 2) dangerous vaults in aniago. 3) external walls in a spanish castle (trigueros del valle, valladolid) ________________________________________________________________________________ geoinformatics ctu fce 2011 127 we shall focus on geometric properties involving global aspects, by forgetting decorative or non-functional elements which have a very high value from the cultural heritage viewpoint. from the structural element viewpoint, there are some important issues concerning to identification and evaluation of parallelism and perpendicularity constraints in walls or aligned elements. for example, in [9] the superposition of clipped contours allows to identify misali gnments and irregularities in the distribution of efforts very close to the maximal tensions supported by the building. however, this paper will be focused on detecting meaningful elements for structural analysis in the fabrics. multiresolution models are necessary to patch together local factors and global behaviour of the whole building. synergistic combinations of both explain the persistence of fabrics, and inversely the local damages in some components would explain some global crash phenomena. this interrelation between local and global aspects is transversal to the above described modelling tools arising from mathematical, physical and mechanical (mpm, in the successive) aspects which provide models and technologies to be applied. basic approaches for getting multiresolution models by sampling are based on blind sampling, i.e. decimation techniques. it removes important data relative to joints, fissures or cracks whose detection and analysis are crucial to detect structural problems and avoid a generalized crash. hence, it is necessary to develop smart strategies for sampling which include different kinds of mpm properties and characteristics: a) geometric modelling involves possible irregular distribution of components depending on the distance, the relative orientation and the reflectance capability of scanned materials; the last one can be corrected by superimposing high resolution views, but the other ones require automatic procedures that apply inversely proportional weight to the far or skew information w.r.t. the nearest of frontal objects with a higher density in the point cloud. b) physical-chemical analysis concerns to different properties of materials (related to internal composition, density, porosity, absorption or thermal parameters, e.g.) with a different behaviour depending on environmental conditions. 
c) mechanical properties involve different kinds of quasi-static efforts (compression, traction, torsion, e.g.), dynamical aspects (arising from vibrations or additional tensions, e.g.), the answer of the fabrics to such efforts (involving different ways of measuring elasticity, e.g.) and irregularities in the distribution of materials (masonry versus stone in walls, e.g.). the integration of these aspects requires specific experiments for evaluating structural and materials components. furthermore, their in-situ evaluation requires a visual inspection of the state of joints (including materials decay, unsteady states) or bending effects in the elements which are supporting stairs, vaults or different kinds of roofs and/or internal coverings. 3. fitting geometric primitives to real data the traditional representations in architecture are based on drawing polylines using computer aided design (cad) software. here, 3d points can be grouped in piecewise linear (pl) models linked to triangulations in which the detection of discontinuities up to a threshold of normal unit vectors allows edge detection. for instance, the intersection of dominant planes gives edges which very often are hidden by decorative elements. in projective geometry, sections and projections are the main inputs for a structural analysis based on 1d elements; so, deviations w.r.t. parallelism or perpendicularity constraints can be detected in a semi-automatic way. however, this approach does not solve troubles such as the detection of deviations (w.r.t. parallelism or perpendicularity) or torsion effects in superficial elements, which are not visible in planar representations. this problem requires the fitting of volumetric primitives to 3d data. figure 2: polylines for arcs and profiles are manually drawn over cad files arising from points clouds. ________________________________________________________________________________ geoinformatics ctu fce 2011 128 the fitting of geometric primitives to a points cloud requires visual inspection to identify which kind of geometric primitive is the most appropriate: dominant planes for façades, cylinders for most columns and spheres or cylinders for vaults nerves or columns. the automation requires a volumetric segmentation able of identifying geometric primitives and fine details following a coarse-to-fine strategy. coarse analysis corresponds to the detection of simple volumetric primitives (dominant planes, cylinders, e.g.) which are grouped in architectural objects (towers, urban blocks) according to order and adjacency constraints as in [10]. hence, a volumetric segmentation adapted to the architectural object and its environment is obtained. figure 3: lack of verticality and torsional buckling of walls is one of the most common defects which can be evaluated by computing the difference with respect to ideal geometric primitives. we have developed algorithms for volumetric primitive fitting, and we are working on the automatic detection and maximal prolongation of thin 3d polylines. from the structural viewpoint both types of data are necessary. indeed, in correspondence with joints, bars and walls, zero-, oneand two-dimensional elements must be treated in a different way according to their structural role. deterioration at joints is transmitted to columns and nerves which must be surve yed too. the lack of symmetry in the distribution of geometric elements can generate structural defects in the fabric. 
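the edge detection mentioned in section 3 (discontinuities of the unit normal vectors up to a threshold) can be sketched as follows. it is an illustrative numpy sketch, not the software described here; it assumes a triangulated mesh with consistently oriented faces, and the angular threshold is arbitrary.

import numpy as np
from collections import defaultdict

def crease_edges(vertices, faces, angle_deg=30.0):
    """flag mesh edges whose two adjacent triangles have normals differing by
    more than angle_deg; such discontinuities approximate the edges of the
    underlying primitives (e.g. the intersection of two dominant planes)."""
    vertices = np.asarray(vertices, dtype=float)
    faces = np.asarray(faces, dtype=int)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces[tuple(sorted(e))].append(fi)
    cos_thr = np.cos(np.radians(angle_deg))
    return [edge for edge, fs in edge_to_faces.items()
            if len(fs) == 2 and normals[fs[0]] @ normals[fs[1]] < cos_thr]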
for example, sometimes a wall at one side plays the same role as a collection of columns on the parallel side, with a quite different distribution of efforts. the most difficult issues concern to the behaviour of 2d elements relative to walls and vaults. the usual strategy for the adjustment consists of generating triangular meshes from points cloud and evaluating the map of unit normal vectors which plays the role of dual representation of the geometric object [11]. the most common problem for walls is the lack of verticality or bending effects, especially common in heterogeneous walls. in this case, a dense points cloud allows to compare the resulting triangulation w.r.t. an ideal vertical plane (perpendicular to ground), and to evaluate differences w.r.t. a defined threshold. figure 4: 1) planar segmentation for detecting plane primitives in a house suspicious of structural problems. 2) another view of the same house showing the lack of perpendicularity at the main façade. ________________________________________________________________________________ geoinformatics ctu fce 2011 129 curved vaults display some additional troubles. deviations must be evaluated w.r.t incidence conditions for normal vectors. the affixes of the unit normal vectors can be represented as points of a sphere of unit radius. the concentration of affixes of normal vectors in the gauss sphere around a fixed value reveals the existence of a spherical piece, whereas the concentration around a meridian is equivalent to the existence of a cylindrical piece. for a ribbed vault (intersection of two cylinders), the affixes of normal unit vectors would be located along two meridians in the gauss sphere. obviously, more complicated vaults can be geometrically interpreted as ruled or as translation surfaces (of a line in regard another line, e.g.) which are necessary for slabs construction. however, we shall reduce to the most common cases linked to ordinary vaults which can be found in romanesque and gothic styles, where we have several meaningful examples. figure 5: map of “almost-normal” unit vectors fitted to a sphere (1) and cylinder (2). normal directions of sphere pass centre and normal directions of cylinder cut out the axis. due to the discrete approach performed by a superimposed triangulation shown in figure 5 (noise is irrelevant and regularly distributed), accuracy cannot be expected in intersection properties, but only a concentration of intersections around the representation in the gauss sphere. two relevant remarks for structural analysis are the following ones: a) bars of the structure (columns or vaults nerves) must be treated independently to avoid outliers in triangles connecting points of vaults with points of the other structural elements (columns and vault nerves); currently this task is manually performed by cutting out the meaningful region. b) after clipping, a treatment of surfaces corresponding to vaults is performed to lod: coarsest for detecting defects in global shape and finest for measuring local deviations w.r.t. the ideal model. figure 6: low-resolution models provide a coarse approach to the original surface with the corresponding triangulations which simplifies the detection and suppression of outliers for shape estimation. outliers are points whose difference w.r.t. "expected property" (model hypotheses to validate) is larger than a threshold. in a first stage of sampling, outliers cannot be removed according to the expected distribution. 
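to make the wall check described above concrete, the following sketch is an illustration only (not the authors' implementation; numpy, the synthetic leaning wall and the 2 cm threshold are our own assumptions): it fits an ideal vertical plane to a wall cloud and flags the points whose deviation exceeds the threshold.

```python
import numpy as np

def fit_vertical_plane(points):
    """fit a plane constrained to be vertical: the wall trace is estimated by a
    2d principal-direction fit in plan, so the plane normal stays horizontal."""
    xy = points[:, :2]
    centroid = xy.mean(axis=0)
    _, _, vt = np.linalg.svd(xy - centroid)
    direction = vt[0]                                   # wall direction in plan
    normal = np.array([-direction[1], direction[0], 0.0])
    d = -normal[:2] @ centroid
    return normal, d

def wall_deviations(points, threshold=0.02):
    """signed distances to the ideal vertical plane and an outlier mask (metres)."""
    normal, d = fit_vertical_plane(points)
    dist = points @ normal + d
    return dist, np.abs(dist) > threshold

# toy wall: 10 m long, 6 m high, leaning 1 cm per metre of height
x, z = np.meshgrid(np.linspace(0.0, 10.0, 40), np.linspace(0.0, 6.0, 30))
wall = np.column_stack([x.ravel(), 0.01 * z.ravel(), z.ravel()])

dev, flagged = wall_deviations(wall)
# the lean shows up as a systematic deviation growing with height
print("maximum deviation [m]:", round(float(np.abs(dev).max()), 3),
      "- flagged points:", int(flagged.sum()))
```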
contrarily, an important issue is to identify whether the observed deviations are due to spurious facts (including noisy information) or arise from true structural problems; spurious deviations must be deleted only after inspection. figure 7: 1) normal vectors of triangles for a low-resolution sample of a spherical vault give the concentration of their intersections around a point at the centre of a sphere. 2) two consecutive slices of a cylindrical element from a low-resolution sample of a column determine a slab. the intersection of normal vectors gives points which would ideally lie on the cylinder axis; meaningful deviations of the line connecting them are reported. 5. adaptive smart sampling with importance functions before validating and propagating hypotheses, it is necessary to design a smart procedure to initialize the process, select appropriate thresholds and identify the sample size. beyond the traditional suppression of outliers, this implies a careful analysis of sampling procedures and an evaluation of the convergence rate. in this section we sketch a general strategy for smart sampling based on importance functions and progressive sampling. traditional blind sampling follows random criteria; hence it ignores irregularities in the distribution which can be due to the distance, the relative orientation or the reflectance of materials, e.g. such sampling is highly efficient, but it takes into account neither irregularities in the distribution nor information about proximity conditions in the triangulation, such as those corresponding to nerves or columns in vaults. our strategy is performed in two steps: 1) progressive sampling consensus, which selects the minimal number of points for hypothesis generation; 2) importance functions, which correct irregularities and validate hypotheses in robust models by propagating conditions in regularly distributed samples. the process starts with a low-resolution sample obtained from successive decimation. hypothesis generation makes reference not only to the minimal number of points (3 for a plane, 4 for a sphere or a cylinder) in general position, but also to the adjacent ones (the nearest ones to the 3 or 4 selected, found with nearest-neighbour search procedures). this choice is justified because it is necessary to propagate the geometric condition to the nearest points in order to verify the adjustment to planarity, spherical and cylindrical conditions. for example, if the process starts with a sample of n points (around 500 for previously segmented curved surfaces, e.g.), valid subsamples must verify the appropriate incidence conditions for normal unit vectors: parallelism to dominant planes, or incidence w.r.t. the centre of the ideal sphere or the axis of the ideal cylinder. if the results are appropriate, they are propagated to other subsamples which are randomly chosen. for a curved object, the strategy consists of selecting a pivot point p0 and extracting three points from two levels of proximity. then, properties are identified from the 4 tetrahedra sharing the same pivot point. more precisely, the 3 nearest points to the pivot p0 are identified and triangulated, the normal unit vectors for each tetrahedral configuration are computed and the intersection of the normal directions (up to a threshold) is stored as a putative "centre".
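as an aside, a minimal sketch of this first pivot step follows (our own illustration, not the authors' code): the nearest neighbours are found by brute force, the "intersection" of the normal directions is taken in the least-squares sense, and, since the toy cloud is an exact sphere, the analytic outward normals stand in for the normals that would in practice come from the triangles sharing the pivot.

```python
import numpy as np

def k_nearest(cloud, pivot_idx, k=3):
    """brute-force nearest neighbours of the pivot (a k-d tree would be used on real clouds)."""
    d = np.linalg.norm(cloud - cloud[pivot_idx], axis=1)
    d[pivot_idx] = np.inf
    return np.argsort(d)[:k]

def meeting_point(origins, directions):
    """least-squares point closest to a bundle of 3d lines: the putative centre."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, v in zip(origins, directions):
        v = v / np.linalg.norm(v)
        P = np.eye(3) - np.outer(v, v)      # projector orthogonal to the line direction
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def line_point_distance(o, v, x):
    v = v / np.linalg.norm(v)
    return np.linalg.norm((x - o) - ((x - o) @ v) * v)

# toy cloud: 500 points on a sphere of radius 5 centred at (1, 2, 3)
rng = np.random.default_rng(0)
centre_true = np.array([1.0, 2.0, 3.0])
p = rng.normal(size=(500, 3))
cloud = centre_true + 5.0 * p / np.linalg.norm(p, axis=1, keepdims=True)

pivot = 0
idx = np.concatenate(([pivot], k_nearest(cloud, pivot, k=3)))
pts = cloud[idx]
# in a real cloud the normal directions come from the triangles sharing the pivot;
# for this synthetic sphere the exact outward normals are used instead
normals = (pts - centre_true) / 5.0

centre_est = meeting_point(pts, normals)
residuals = [line_point_distance(o, v, centre_est) for o, v in zip(pts, normals)]
threshold = 0.05    # illustrative acceptance threshold in metres
print("putative centre:", np.round(centre_est, 3))
print("spherical pattern accepted:", max(residuals) < threshold)
```

for a cylindrical patch the same intersections would not concentrate around a point but align along a line, the putative axis.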
this process is repeated for the nearest points of the above three nearest points, sharing and verifying the incidence hypothesis for normal directions supported by at most three adjacent tetrahedra which share the pivot. a spherical shape must display a concentration of intersections around a point (the putative centre of the sphere) up to a threshold, whereas a cylindrical shape must display an alignment of intersections along a line (the putative axis of the cylinder). if the sample does not display any kind of pattern, it is rejected; otherwise, it is accepted as a valid sample. next, the process chooses a second sample and the previous steps are repeated. for previously segmented low-resolution clouds the convergence is very fast (8 or 10 samples). the final output of this process must be an initial estimation of the geometric parameters. the second step is focused on achieving a more regular distribution, by correcting irregularities due to different factors. to avoid the suppression of meaningful information in early stages of processing, we propose three criteria involving: a) the absolute distance: a higher weight is given to the farthest points; b) the relative orientation: points lying on skew planes must be weighted with a higher value than those on frontal planes; the relative orientation is computed from the normal unit map linked to the triangulation, and only triangles whose normal vector makes an angle of less than 45° w.r.t. the view line are considered; c) the light brightness: the photometric response of an object depends on its reflectance, so a higher weight must be given to the darkest points. this step can be performed by introducing a marginal probability of choice in terms of the difference or the ratio with respect to the local maximum; in this way, we obtain a simple reformulation of the inverse-proportionality arguments arising from an ideal model. the implementation of any kind of importance function requires a preliminary reading of the cloud for identifying extreme points, and the generation of auxiliary tables for comparing and updating information. the detection of extreme points (a very low-level description of the boundary) allows bounding the valid regions, estimating the global distribution of errors and focusing the attention on the distribution of irregularities inside the bounded region. for regularly distributed clouds with gaussian noise, a simple product of normal distributions would give the global distribution of errors linked to n correspondences. however, this is not the case here, so it is necessary to give relative non-negative weights wd, wo and wr linked to the distance, orientation and reflectance properties, which verify wd + wo + wr = 1. for the collection of valid samples obtained in the first stage, partial distributions of errors pd, po and pr are obtained. these are combined according to the weights into a global distribution of errors wdpd + wopo + wrpr. the usual impsac resolution follows maximum likelihood optimization criteria (in terms of gradient steepest descent, e.g.); we have adapted this approach to a formulation in the framework of multicriteria optimization. however, a more detailed analysis is still needed to avoid false minima. 5. conclusions this paper introduces a methodology for the low-level recognition of planar and curved shapes present in a large part of the cultural heritage religious buildings which were constructed between the xi and xvi centuries.
the methodology includes an adaptive and smart sampling filter of point clouds based on importance functions. the method generates uniformly distributed sampling points by removing or creating points in heterogeneous point clouds resulting from laser scans. a special attention is focused on structural defects involving mechanical efforts for detecting, evaluating and representing their influence on the whole fabric of the building. experiments over architectural heritage scenarios show the goodness in terms of quality and accuracy of this approach and the valuable support provided to experts at the evaluation of the state of structural elements. moreover, an adaptive sampling based on importance functions offers several possibilities which motivate further research. for example, it allows the automatic recognition and segmentation of point clouds according not only to geometric primitives, but also with the volumetric density of information. the resulting balanced segmentation in several clouds allows decomposing and distributing tasks according to different characteristics and resolution levels. also, this helps to store and retrieve the digital artefacts which compose cultural heritage buildings. besides, the established framework allows to include more complex shape estimations based in incidence properties of lines or even to perform curvature analysis for ornamental details. in presence of more complicated map of normal behaviour, it would be convenient to introduce the willmore energy functional to evaluate the lack of sphericity. maybe some other adjustments of importance functions are needed. 6. acknowledgements this work has been partially supported by the adispa (cicyt) project with reference bia2009-14254-c02-01. 6. references [1] schnabel, r. and wahl, r. and klein, r. efficient ransac for point-cloud shape detection. computer graphics forum, 26 (2), 214--226, acm. [2] bao-quan shi, jin liang, qing liu. adaptive simplification of point cloud using k-means clustering. computeraided design 2011. accepted manuscript, issn 0010-4485, elsevier. [3]p.h.s. torr and c. davidson, impsac: synthesis of importance sampling and random sample consensus, presented at ieee trans. pattern anal. mach. intell., 2003, pp.354-364. [4] m. a. fischler and r. c. bolles. random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. comm. of the acm 24: 381–395. 1981. [5] bao-quan shi, jin liang, qing liu. adaptive simplification of point cloud using k-means clustering. computeraided design 2011. accepted manuscript, available online 9 april 2011, issn 0010-4485, elsevier. [6] d. a. forsyth and j. ponce. computer vision, a modern approach. prentice hall. 2003. [7] r. hartley and a. zisserman (2003). multiple view geometry in computer vision (2nd edition). cambridge university press. [8] j.d. pérez-moneo, j. finat, j.j. fernández, j. i. san josé and j. martínez. uvacad: a software platform for 3d architectural surveying. xxi cipa international symposium: anticipating the future of the cultural. vol. xxxvi5/c53 isnn 1682-1750. athens 2007. [9] j. i. san josé, j.d. pérez, j. finat, j.j. fernández and j. martínez. evaluation of structural damages from 3d laser scans. xxi cipa international symposium: anticipating the future of the cultural, athens, 2007 [10] vosselman, g. and gorte, b.g.h. and sithole, g. and rabbani, t. recognising structure in laser scanner point clouds. 
international archives of photogrammetry, remote sensing and spatial information sciences 46 (8), 33-38, 2004. [11] pauly, m., gross, m. and kobbelt, l.p.: efficient simplification of point-sampled surfaces. visualization 2002, 163-170, ieee. multi-scale modeling of the basilica of san pietro in tuscania (italy). from 3d data to 2d representation filiberto chiabrando1, dario piatti2, fulvio rinaudo2 1 politecnico di torino, dinse, viale mattioli 39, 10125 torino, italy, filiberto.chiabrando@polito.it 2 politecnico di torino, ditag, corso duca degli abruzzi 24, 10129 torino, italy, dario.piatti@polito.it, fulvio.rinaudo@polito.it keywords: lidar, 3d modeling, digital photogrammetry, restoration abstract: the basilica of san pietro is a romanic-style building located in the municipality of tuscania in the lazio region, about 100 km from rome. in 1971 the apse dome collapsed during an earthquake and the important fresco of a christ pantocrator was destroyed. in 1975 the dome was reconstructed using reinforced concrete. in 2010 an integrated survey of the church was performed using lidar techniques integrated with photogrammetric and topographic methodologies in order to realize a complete 2d documentation of the basilica of san pietro. thanks to the acquired data, a complete multi-scale 3d model of the church and of the surroundings was realized. the aim of this work is to present different strategies in order to realize correct documentation for cultural heritage knowledge, using typical 3d survey methodologies (i.e. lidar survey and photogrammetry). after data acquisition and processing, several 2d representations were realized in order to produce traditional supports for the different actors involved in the conservation plans; moreover, starting from the 2d drawings, a simplified 3d modeling methodology was followed in order to define the fundamental geometry of the basilica and the surroundings: the achieved model can be useful for a small architectural scale description of the structure and for the documentation of the surroundings. for the aforementioned small architectural scale model, the 3d modeling was realized using the information derived from the 2d drawings with an approach based on constructive solid geometry. using this approach the real shape of the object is simplified. this methodology is employed in particular when the shape of the structures is simple, to communicate new project ideas, or when, as in our case, the aim is to give an idea of the complexity of an architectural cultural heritage. in order to follow this objective, a small architectural scale model was realized: the area of the civita hill was modeled using the information derived from the 1:5000 scale map contours; afterwards the basilica was modeled in a cad software using the information derived from the 2d drawings of the basilica. finally, a more detailed 3d model was realized to describe the real shape of the transept. all these products were realized thanks to the data acquired during the performed survey. this research underlines that a complete 3d documentation of a cultural heritage site during the survey phase allows the final user to derive all the products that could be necessary for a correct knowledge of the artifact. 1.
introduction the importance of the basilica of san pietro in tuscania has placed the attention to the issue of the integration of different survey techniques such as lidar, photogrammetry, total station and manual measurements with particular interest to the lidar technique in order to build up a complete and multi-scale documentation of the site. the already achieved experiences (e. g. [1]) demonstrate that only a suitable integration between the available technologies allows a complete and exhaustive collection of metric information to be achieved. the acquired data can be used to produce a correct documentation such as traditional 2d drawings (plans, sections and façades), 3d models and videos able to transmit the correct shape of the surveyed object to different skilled people. the evolution of lidar and digital photogrammetry techniques forces to move the selection of the needed information from the field to the office, after all the measurements have been already done. this fact speeds up the acquisition phases but drastically increases the time needed to extract useful information with the different degrees of detail and accuracy required by the different considered representation scales. both lidar technique and digital photogrammetry allow fast acquisitions of a big amount of metric information which can be used to produce 2d and 3d models at different nominal scales [2,3,4,5]. at the end of the acquisition phase, a metrically coherent archive of information can be realized from which geometric and semantic information can be extracted at different scales up to the maximum scale allowed by the employed acquisition techniques. in the proposed example the 2d drawings and the 3d models were realized at the maximum possible scale ___________________________________________________________________________________________________________ geoinformatics ctu fce 301 for some interesting details of the structure; more general 3d models and videos where also produced, which are useful to explain to visitors the relationship between the land context and the single architectural artifacts. in the proposed approach, both raw lidar data and the photogrammetric data are shared besides the final products, in order to allow possible integration and deeper information extraction by other interested users. 2. historical background the survey of the site of san pietro in tuscania gives the possibility to verify the multi-scale approach used to build up a metric and semantic archive of information acquired by using lidar and digital photogrammetry techniques. the san pietro site is composed by different structures with heterogeneous architectural styles realized over some centuries for historical reasons: the basilica, two medieval towers and a palace used in the past as bishop main site. the basilica is a romanic style building with three naves: the most ancient nucleus (xi cent. a. c.) is represented by the crypt, which was realized under the actual settlement of the basilica, the apse and the first part of the naves. it is surrounded by the ruins of the ancient bishop citadel and it is placed on the top of the civita hill (figure 1 left) in a rural area outside the current urban area of tuscania; the basilica was the religious center before the gradual abandonment of the area. in the xiii century the basilica reached the actual shape: the three naves were protracted and the main façade was realized by using a monumental decorative apparatus (figure 1 right). 
in 1971 an earthquake caused the collapse of the apse halfdome (figure 2 center and right) and the damage of a medieval fresco representing a christ pantocrator (figure 2 left). figure 1: a view of the civita hill (left) and the main facade of the basilica di san pietro (right). in 1975 a first restoring intervention fitted a concrete half-dome in order to stop the deterioration of the basilica. during the earthquake all the other structures were subjected to non-structural damages; therefore, after the restoration, the site was re-opened to the public. figure 2: the old fresco (left) and the apse after the earthquake in 1971 (center and right). ___________________________________________________________________________________________________________ geoinformatics ctu fce 302 3. the 3d survey of the basilica the data acquisition was planned considering a complete survey of all the structures of the site. the shape and the size of the details to be recorded suggested the authors to plan all the acquisition phases by considering a final accuracy of 2 cm. therefore, the following aspects were considered: a control network was realized by means of total station and rigorously adjusted in order to estimate the final accuracy on the vertexes. all the possible distances and horizontal angles were considered in order to have redundant measurements and a local coordinate system was established; the lidar acquisition stations were planned in order to reach the minimum number of locations and to realize a complete point cloud acquisition. also the resolution of each scan was fixed to properly describe the different surfaces of the buildings and structures; natural and ad-hoc signalized points were chosen on the surface to guarantee a good registration of the acquired point clouds. that points were surveyed by using the total station to connect all the acquired lidar point clouds in the local coordinate system; each point cloud was integrated with the radiometric information coming from the digital images in order to have colored point clouds; moreover, some overlapping images were used in order to integrate lidar data by using a photogrammetric approach. 3.1 the control network the main role of the control network is to establish a unique and stable coordinate system able to satisfy all the accuracy requirements for a complete architectural 3d survey. in the area of the basilica of san pietro a total station survey was realized according to the shape of the object: the vertexes of the network were placed in order to have the maximum visibility of the other vertexes. each point of the control network was documented by means of witnessing diagrams to allow future use of the same network. the vertexes were materialized and hidden in order to preserve them from natural and men actions. the obtained control network was composed of 12 vertexes: 5 outside and 7 inside the basilica. the distance and angle measurements were realized by using leica total stations (ts02 and ts06) following accurate procedures in order to eliminate systematic errors and gross errors. after the control network adjustment the mean square error (m. s. e.) of all the estimated coordinates is less than 5 mm, therefore suitable for the survey purposes. 
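the rigorous network adjustment was carried out with surveying software and is not reproduced here; purely as an illustration of the least-squares idea behind it, the sketch below (numpy; the stations, distances and noise level are invented) adjusts the planimetric coordinates of a single new vertex from redundant distance observations and reports the residuals.

```python
import numpy as np

def adjust_point_from_distances(stations, distances, x0, iterations=10):
    """gauss-newton estimate of one 2d point from redundant distance observations
    to known control stations; returns the adjusted coordinates and the residuals."""
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        diff = x - stations                     # (n, 2)
        dist = np.linalg.norm(diff, axis=1)     # computed distances
        A = diff / dist[:, None]                # jacobian of the distance equations
        w = distances - dist                    # misclosures (observed - computed)
        dx, *_ = np.linalg.lstsq(A, w, rcond=None)
        x += dx
    return x, distances - np.linalg.norm(x - stations, axis=1)

# illustrative data: four known stations and distances measured to a new vertex,
# with a few millimetres of observation noise
stations = np.array([[0.0, 0.0], [60.0, 5.0], [55.0, 70.0], [-5.0, 65.0]])
true_pt = np.array([28.0, 33.0])
rng = np.random.default_rng(1)
obs = np.linalg.norm(true_pt - stations, axis=1) + rng.normal(0.0, 0.003, 4)

xy, res = adjust_point_from_distances(stations, obs, x0=[30.0, 30.0])
print("adjusted coordinates [m]:", np.round(xy, 4))
print("residuals [mm]:", np.round(res * 1000, 1))
```

the real adjustment of the control network also includes horizontal angles and all twelve vertexes simultaneously; the sketch only shows the principle of redundancy plus least squares.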
3.2 integration surveys all the points useful to orient the overlapping images and to register the point clouds were materialized by using reflective targets (pasted on the structure) or by using natural points, in order to establish a robust connection between lidar and photogrammetric data. by using the total stations, the coordinates of those points were measured simply by single collimations from the vertexes of the control network. the surveyed control points were integrated with a set of check points useful to verify the actually achieved accuracy after the modeling phase at the end of the work. with the same instrumentation and techniques, all the portions of the site without special requirements for modeling (e.g. low level of detail) were measured by selecting only the information useful to make a complete 3d model at the selected scale. also the geometric elements necessary to describe the upper parts of the towers were measured by using the above described methodology in order to integrate the lidar survey. 3.3 lidar acquisition the lidar acquisitions were performed by using the riegl lms-z420 scanner. in order to carry out a complete survey of the basilica, fifteen different scan positions were planned and realized: seven outdoor and eight indoor, to record all the required details of the building. each scan was previously verified in terms of overlap between adjacent scans in order to reach an overlap of at least 30%, which is necessary to guarantee an accurate registration of the scans. moreover, several reflective targets were positioned in such a way that at least three of them were visible from each scan. each acquired point cloud was integrated with the digital images acquired by using a calibrated camera placed on top of the scanner; in this case a canon eos 5d camera with 24 mm focal length optics was used. the overlap between adjacent images acquired by the camera was fixed in order to ensure a correct and complete radiometric mapping of the point clouds. to document the lidar acquisition phase, each scan was described by means of a table which contains the information needed to understand the content of the scan. 4. 2d drawing and multi-scale 3d model production first of all, the lidar data were processed in order to obtain a suitable product for the realization of the drawings. the registration of the fifteen recorded point clouds was realized by using the riscan pro software in order to reach an approximate 3d model of all the acquired surfaces. during this phase all the visible reflective targets were employed to give a preliminary solution, then an icp approach was used to refine the registration. the first comparisons of the achieved accuracies, performed by using the check points, showed discrepancies higher than expected; therefore a complete compensation of the scans, considering the indoor and outdoor scans as a block, was realized. figure 4: 3d models of the basilica of san pietro. the six registration parameters were then re-estimated by using a rigorous least squares approach [1]. after that, the discrepancies on the check points showed values lower than 2 cm. the editing of the complete point cloud was performed by using the geomagic studio software. automatic and manual procedures were employed in order to reduce the number of points in regular surfaces and to extract the break-lines.
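the target-based registration and its icp refinement mentioned above were performed in riscan pro and are of course not reproduced here; the following toy sketch only illustrates the icp idea (brute-force closest points plus an svd/kabsch rigid-transform step), with an invented pair of roughly pre-aligned clouds.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """least-squares rotation r and translation t aligning src to dst
    (kabsch / svd solution used inside each icp iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    """very small point-to-point icp: brute-force closest points + kabsch step.
    assumes the clouds are already roughly aligned (e.g. from reflective targets)."""
    cur = src.copy()
    for _ in range(iterations):
        # closest point in dst for every point of cur (o(n*m); fine for a toy example)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# toy data: a second "scan" is the first one rotated by 2 degrees and shifted a few cm
rng = np.random.default_rng(2)
scan_a = rng.uniform(0.0, 10.0, size=(300, 3))
ang = np.radians(2.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0], [np.sin(ang), np.cos(ang), 0.0], [0.0, 0.0, 1.0]])
scan_b = scan_a @ Rz.T + np.array([0.05, -0.03, 0.02])

aligned = icp(scan_b, scan_a)
print("mean residual after icp [m]:", np.linalg.norm(aligned - scan_a, axis=1).mean())
```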
during this latter part of the procedure also the points acquired during the total station survey were integrated and, in some cases, manual and photogrammetric measurements were used in order to define the missing details [6,7]. finally the data were separated in two different models: one of the whole site (which has been modeled at 1:200 representation scale, figure 4 left) and one of the interior and exterior part of the basilica (which has been modeled at 1:50 representation scale, figure 4 right). both sets of data points and break-lines were used to produce the tin (triangulated irregular network) surfaces with different resolution. 4.1 realization of plans, sections and facades the obtained 3d models delivered an accurate description of the surveyed object and where therefore used to extract different kind of representations. 2d drawings are the most required final products and their production by using 3d models always needs a huge cost in term of time and competencies. the first problem to be faced, especially when architectonic scale (e.g. 1:100 or 1:50) has to be used, is the integration of the information in order to build up a correct description of the object. figure 5 shows the well-known problem between sections extracted from point cloud models and the correct geometry which has to be realized. figure 5: section extracted from the 3d model (red line) and needed model (blue line) (left), horizontal sections generated on the 3d model with the geomagic studio software (right). ___________________________________________________________________________________________________________ geoinformatics ctu fce 304 in order to minimize the number of scans the surveyor has to integrate the foreseen lack of information of lidar scans by using other techniques (e.g. manual measurements, total station survey or photogrammetric technique). obviously a good knowledge of the fundamental of architecture is required to give a satisfactory result. therefore, the survey team has to be composed of both architecture experts and survey experts, since it is not easy to find all these competencies merged in a unique person. when the interpreted integrations are realized on the 3d model, the generation of 2d drawings can be speeded up and the final results show the advantage of using an integrated approach instead of a traditional one. the user can choose the needed position of the intersecting planes and can produce coherent drawings by actually extracting 2d information from a unique 3d reality (figure 5 right). when drawings at lower representation scales have to be generated, the operator has to clearly understand, which are the details that have to be simplified and to adopt the correct symbols and graphical conventions [8]. in the case of the basilica of san pietro an accurate integration between lidar data, total station data, photogrammetric data and manual measurements were used in order to produce the final 2d drawings of the building. all the phases, from the measurements to the 2d representation, were conducted by a multidisciplinary team (engineers, architects and historical experts) in order to represent the correct information and to achieve the proper drawings. 
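the geometric idea behind cutting a plan or a section from the 3d model can be illustrated with a short sketch (ours, not the geomagic studio tool): every triangle of the tin is intersected with the chosen plane and the crossing segments are collected; degenerate cases (vertices lying exactly on the plane) are ignored here for brevity.

```python
import numpy as np

def plane_section(vertices, triangles, z0):
    """intersect a triangulated model with the horizontal plane z = z0 and
    return the resulting 2d segments (the raw material of a plan section)."""
    segments = []
    for tri in triangles:
        pts = vertices[tri]
        crossings = []
        for i in range(3):
            a, b = pts[i], pts[(i + 1) % 3]
            if (a[2] - z0) * (b[2] - z0) < 0:           # edge crosses the plane
                t = (z0 - a[2]) / (b[2] - a[2])
                crossings.append(a[:2] + t * (b[:2] - a[:2]))
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments

# toy model: two triangles forming a vertical quad (a piece of wall)
vertices = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3]], dtype=float)
triangles = np.array([[0, 1, 2], [0, 2, 3]])
for seg in plane_section(vertices, triangles, z0=1.2):
    print(np.round(seg[0], 2), "->", np.round(seg[1], 2))
```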
two examples of the achieved 2d products are reported in figure 6: a plan carried out using the section extracted from the lidar survey integrated with some manual measurements (figure 6 left), and a façade realized using the information derived from the lidar data integrated with the photogrammetric and total station measurements (figure 6 right). figure 6: plan of the basilica (left) and north façade of the basilica (right). 4.2 the urban scale 3d model of the san pietro site 3d models are not only a way to produce 2d drawings, but are more and more used in order to increase the knowledge of architecture and to disseminate the achieved results to people who are not used to, or able to, understand 3d reality by means of 2d drawings. the main advantage of a 3d model is the possibility to better describe the spatial intersections and relationships between the architectural elements; 2d drawings are in fact not able to give a complete idea of the 3d shape of the surveyed object. 3d models allow a direct inspection of the surveyed space to be performed thanks to 3d videos and 3d navigation tools; moreover, the radiometric information gives the user the possibility to simulate a real visit up to all the details as interpreted by the surveyor. the possibility to also see realistic 3d models generated by using photogrammetry allows the user to integrate or correct the 3d models. in the case of the basilica of san pietro a multi-scale approach was employed: first of all, in order to give an overview of the hill where the basilica is located and a first idea of the shape of the architecture, a simplified 3d model at urban scale was realized using the data derived from the 2d drawings and the information on the surroundings derived from the regional cartography at 1:5000 scale. this first model was achieved using the 3d studio max software (figure 6 left). afterwards the realized 2d representations were employed to carry out a simplified model of the basilica: all the parts of the basilica were modeled using cad software [9,10] (figure 6 right). figure 6: 3d model of civita hill and surroundings (left) and 3d model of the basilica at 1:200 scale (right). the buildings were modeled at 1:200 scale considering the conventional contents of this scale: all the 3d information outside and inside the buildings was reduced to simple geometric shapes in order to allow a good comprehension of the space and of the relationships between the different structures of the site. finally, the 3d model of the basilica site was inserted into the cartographic coordinate system by using homologous points between the available cartography and the 3d model; in this way, a complete overview of the area was realized (figure 7 left). special attention was paid to the use of lights and shadows in order to give a clear view of the different locations of the structural elements inside (figure 7 right) and outside the basilica. also the selection of the correct illumination parameters is a topic which needs experts able to understand the fundamental elements to be underlined for a correct and complete transmission of the achieved knowledge of the surveyed structure [11,12,13,14]. figure 7: 3d urban model (left) and rendering of the basilica principal nave (right).
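for the planimetric part, the insertion of the model into the cartographic coordinate system by means of homologous points can be sketched as a 2d helmert (similarity) transformation; the sketch below is illustrative only (the coordinate values are invented, and a real insertion would also carry a height shift).

```python
import numpy as np

def helmert_2d(model_xy, map_xy):
    """closed-form 2d similarity (helmert) transform from homologous points:
    returns scale, rotation matrix and translation mapping model to map coordinates."""
    cm, cg = model_xy.mean(axis=0), map_xy.mean(axis=0)
    a, b = model_xy - cm, map_xy - cg
    # complex-number formulation: b ≈ (s * e^{i·rot}) · a in the least-squares sense
    za, zb = a[:, 0] + 1j * a[:, 1], b[:, 0] + 1j * b[:, 1]
    k = np.vdot(za, zb) / np.vdot(za, za)
    scale, rot = abs(k), np.angle(k)
    R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
    t = cg - scale * R @ cm
    return scale, R, t

# illustrative homologous points (local model system vs. cartographic system)
model_pts = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 30.0], [0.0, 30.0]])
true_R = np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])
map_pts = model_pts @ true_R.T + np.array([652300.0, 4694100.0])

s, R, t = helmert_2d(model_pts, map_pts)
print("scale:", round(s, 6),
      "rotation [deg]:", round(float(np.degrees(np.arctan2(R[1, 0], R[0, 0]))), 3))
print("check:", np.allclose(model_pts @ (s * R).T + t, map_pts))
```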
4.3 the 3d model of the transept of the basilica after the realization of the urban scale 3d model, in order to describe with an higher level of detail the shape of the interior of the basilica, an high resolution 3d model was achieved using the integration between lidar data, photogrammetric data and topographic data. the modeling phase was focused on the transept of the basilica. for this purpose, all the high resolution data were used and several integrations were realized using the data extracted from stereoscopic photogrammetric models in order to recover the hidden parts in lidar scans. in this detailed description (figure 8 left) the light and shadows are of strategic importance to show the results of the survey to the final user. the 3d model phase was conducted using different software: the geomagic studio software for the lidar data processing and editing on the mesh; the rhinocerous software for data integration and merging and finally the 3d studio max software for the rendering phases. in order to speed up the navigation inside the model the amount of data was actually reduced by only mapping the relevant surfaces with the high resolution rgb images (figure 8 right). figure 8: rendering of the transept (left) and textured rendering of a painted wall (right). ___________________________________________________________________________________________________________ geoinformatics ctu fce 306 5. conclusions 3d models are today the best answer to the complete knowledge of architectural artifacts. the possibility to build up 3d models and videos with a selected level of information is one of the main advantages of this kind of approach. one important aspect to be pointed out is the time needed to produce this kind of 3d models. in our case, after a week necessary for data acquisition, more than three months of two skilled operators were spent in order to reach the presented results. the above described experience was developed by using a rigorous approach: first, all the data were documented in order to allow a possible re-use and/or integration by other surveyors; then, several survey techniques were used in order to take the best advantages from each of them; moreover, a check of the final accuracy was realized in order to verify the suitability of the 3d geometric model; finally, the data interpretation and the final products were realized by people with special skills on metric survey and architecture. it is worth remembering that the needed data for a correct modeling phase are related to the level of detail of the final model to be obtained. therefore, the survey operations have to be designed accordingly to the level of detail of the requested 3d model. for the creation of a simplified 3d model a complete lidar survey is not necessary, since a traditional topographic survey with some photogrammetric integration could be enough. on the contrary, if a high resolution model is requested, a complete survey of the object using suitable survey techniques is necessary in order to accurately describe the shape of the object. finally, in order to create photorealistic rendering and real videos of the investigated area, it is necessary to plan and realize a complete image acquisition campaign. 6. 
references [1] chiabrando, f., nex, f., piatti, d., rinaudo, f.: integrated digital technologies to support restoration sites: a new approach towards a standard procedure, digital heritage proceedings of the 14th international conference on virtual systems and multimedia, limassol, cyprus 20-25 october 2008, 6067. [2] beraldin, j.a., guidi, g., russo, m.: acquisizione 3d e modellazione poligonale, milano, mc graw-hill, 2010. [3] fregonese, l., taffurelli, l.: 3d model for the documentation of cultural heritage: the wooden domes of st.mark’s basilica in venice, isprs archives, volume xxxviii-5/w1, 2009. [4] el-hakim, s.f. beraldin, picard, m., godin, g.: detailed 3d reconstruction of large-scale heritage sites with integrated techniques. ieee computer graphics & applications, vol. 23(3), 2004, 21-29. [5] fassi, f., achille, c., fregonese, l., monti, c.: multiple data source for survey and modeling of very complex architecture. isprs archives, vol. xxxviii, part 5 ,2010, 234-239. [6] remondino, f., zhang, l.: surface reconstruction algorithms for detailed close-range object modeling. isprs archives, vol. 36(3), 2006, 117-123. [7] el-hakim, s., beraldin, j.-a., picard, m., cournoyer, l.: surface reconstruction of large complex structures from mixed range data the erichtheion experience. isprs archives, vol. 37(b5), 2008, 1077-1082. [8] de bernardi, m. l.: la forma e la sua immagine, pisa, edizioni ets, 1997. [9] kimura, f.: geometric modelling: theoretical and computational basis towards advanced cad applications, springer-verlag, 2001. [10] yin, x., wonka, p., razdan, a.: generating 3d building models from architectural drawings. ieee computer graphics and applications, 29(1), 2008, 20-30. [11] lo turco , m., sanna, m.: digital modelling for architectural reconstruction. the case study of the chiesa confraternita della misericordia in turin. proceedings of cipa 2009, kyoto, 101-106. [12] klette, r., scheibe, k.: combinations of range data and panoramic images new opportunities in 3d scene modelling, computer graphics, imaging and vision: new trends, sarfraz, m., wang, y., banissi, e. (eds.), proc. of cgiv 2005, 3-10. [13] lensch, h., heidrich, w., seidel, h.: automated texture registration and stitching for real world models. proc. 8th pacific graphics 2000 conf. on computer graphics and application, 317-327. [14] grammatikopoulos, l., kalisperakis, i., karras, g., kokkinos, t., petsa, e.: on automatic orthoprojection and texture-mapping of 3d surface models. isprs archives, 35(5), 2004, 360-365. http://www.isprs.org/proceedings/xxxviii/5-w1/pdf/fregonese_taffurelli.pdf http://www.isprs.org/proceedings/xxxviii/5-w1/pdf/fregonese_taffurelli.pdf http://www.isprs.org/publications/archives.html http://www.isprs-newcastle2010.org/papers/170.pdf http://www.isprs-newcastle2010.org/papers/170.pdf http://www.isprs-newcastle2010.org/papers/170.pdf _______________________________________________________________________________________ geoinformatics ctu fce 2011 157 combining a virtual learning tool and onsite study visits of four conservation sites in europe a. chenaux1, m. murphy1, g. keenaghan1, j. jenkins3, e. mcgovern1, s. 
pavia2 1 dublin institute of technology, ireland 2trinity college dublin, ireland 3purdue university, usa alain.chenaux@dit.ie, maurice.murphy@dit.ie keywords 3-d modeling, visualization of cultural heritage, virtual learning environment, hbim abstract: the design and evaluation of virtual learning environments for construction and surveying students is presented in this paper; by combining virtual learning environment and on-site student surveys to model and replicate practice in the architectural heritage sector. the virtual learning environment is enhanced with real live survey projects whereby students collect the data to build virtual historic buildings from onsite surveys using advanced survey equipment. the survey data is modelled in hbim; historic building information modelling (hbim) is currently being developed as a virtual learning tool for construction and surveying students in the dublin institute of technology. hbim, is a novel solution whereby interactive parametric objects representing architectural elements are constructed from historic data, these elements, including detail behind the scan surface are accurately mapped onto a laser or image based survey. the architectural elements are scripted using a geometric descriptive language gdl. in the case of this project a virtual learning environment is being developed which combines advanced recording and surveying with building information modelling (bim) to simulate and analyse existing buildings. 1. introduction the aim of this paper is to present the outcome of four study visits of dublin institute of technology and purdue students to four conservation sites in europe: leaning tower of pisa, national monument edinburgh, karolinum university building of prague, henrietta street dublin and the subsequent development of a virtual learning environment using both the survey data and building information models (bim). every year students from the dublin institute of technology in ireland carry out a survey of a historic building in europe. the aim of the survey is to introduce students to the principles of building conservation in other eu states. the onsite surveys and reference to existing survey data, also give the students an insight into restoration techniques and the methods of construction used in these buildings over their lifespan. the modelling system described as historic building information modelling (hbim) [1,2,3] is a novel prototype library of parametric objects based on historic data and a system of cross platform programmes for mapping parametric objects onto a point cloud and image survey data. the hbim process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. the next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from vitruvius to 18th century architectural pattern books [4]. in building parametric objects, the problem of file format and exchange of data has been overcome within the bim archicad software platform by using geometric descriptive language (gdl). the plotting of parametric objects onto the laser scan or other surveys as building components to create or form the entire building is the final stage in the reverse engineering process. the final hbim product is the creation of full 3d models including detail behind the object‟s surface concerning its methods of construction and material make-up. 
the resultant hbim can automatically create cut sections, details and schedules in addition to the orthographic projections and 3d models (wire frame or textured). in the case of this project, a novel aspect is introduced through applying both traditional and advanced recording and building modelling techniques developed in the dublin institute of technology to develop a virtual learning environment to simulate and analyse historic buildings. the four surveys are described in this paper detailing the methods of on-site and off-site data collection and the process of building partial models of the buildings. in conclusion a theorethical design framework is described for combining the onsite surveys and the historic building information modelling to create a virtual learning environment which has the potential to expand and be shared by other educational institutes. mailto:alain.chenaux@dit.ie mailto:maurice.murphy@dit.ie _______________________________________________________________________________________ geoinformatics ctu fce 2011 158 figure 10: students on-site surveying and hbim model 2. survey: leaning tower of pisa the hbim process in this case was divided into three steps: the first was based on existing laser-scan survey data and 2d digital photo-modelling; the second based on ground truth survey data collection using theodolites and hand-held laser measurement units (distometers) and finally stage three, the creation of historic building information model of the tower [5]. 2.1 step 1existing survey data photographic and laser survey of the building the identification of building typology, architectural detail, geometry, principles of the external and internal structure and fabric, positioning of openings, proportional relationship of the building's elements and classical detailing were initially based on existing laser scans, photographies and cad surveys of the historic structures downloadable from the website of cyark, a non-profit entity (http://archive.cyark.org/). cyark's mission is to digitally preserve cultural heritage sites through collecting, archiving and providing open access to data created by laser scanning, digital modeling, and other state-of-the-art technologies. the cyark's database allowed the students to access information such as the data detailed in figure 2. the product of the laser scan survey is described as a point cloud, which represents the x, y, z coordinates of a scanned object (figure 2). the point cloud can then be textured from image data to create a virtual 3d model of a structure or object; accurate measurements can be abstracted from the point cloud. existing 2d cad drawings and historic surveys as detailed in figure 3 were also used to obtain measurements of the structure. http://archive.cyark.org/ _______________________________________________________________________________________ geoinformatics ctu fce 2011 159 figure 11: laser scan survey of the leaning tower of pisa figure 12: historic survey data _______________________________________________________________________________________ geoinformatics ctu fce 2011 160 digital photography was used for 2d/3d modelling and independent data collection. the 2d modelling was achieved through taking accurate ground truth survey measurements on the object and geo-referencing the images using target measurements on the image representing x and y coordinates at a minimum of four points as detailed in figure 4. 
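the geo-referencing of an image from four (or more) target points amounts to estimating a plane-to-plane homography; a minimal direct-linear-transform sketch follows (our illustration only, with invented pixel and façade coordinates, not the software actually used by the students).

```python
import numpy as np

def homography_from_points(img_pts, world_pts):
    """direct linear transform: 3x3 homography mapping image pixels to the
    façade plane, estimated from (at least) four point correspondences."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def map_point(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# hypothetical example: four targets measured on the façade (metres) and
# their pixel coordinates in the photograph
world = [(0.0, 0.0), (7.4, 0.0), (7.4, 5.1), (0.0, 5.1)]
image = [(112.0, 934.0), (1830.0, 905.0), (1795.0, 118.0), (150.0, 96.0)]

H = homography_from_points(image, world)
print("pixel (970, 520) maps to façade coordinates", np.round(map_point(H, 970.0, 520.0), 3))
```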
figure 13: geo-referenced image of tower section 2.2 stage 2collection ground truth survey data using theodolite and hand held laser a traditional surveying approach was taken involving the use of 10” theodolites to record angles in both the horizontal and vertical planes and laser distometers to obtain distances. a framework of control stations was first established around the tower of pisa. when deciding upon the station locations common surveying rules were respected including intervisibility between stations, the formation of a well-conditioned control framework and checking that each point of interest could be visible from at least two stations [6]. figure 5: control framework horizontal and vertical angles as well as distances were recorded between survey stations and coordinates were calculated in an arbitrary coordinate system based on the traverse calculations. a fractional linear misclosure better than 1:10‟000 was computed which would be within the allowable range for large construction projects [7]. the intersection method was then applied to determine the relative position of detail points on the tower once both horizontal and vertical angles were observed with the theodolite. each detail point p was observed from two separate stations and both horizontal and vertical angles were recorded (figures 6 and 7). as a check both faces were considered allowing for a 1‟ error. once all the data was recorded students created their own functions within excel to compute the relative position of all points observed (figures 6 and 7). since the vertical angle of each detail point was observed from two different control stations the height can be computed for each of the two instrument positions a and b. a consistent difference of a few centimetres was present between two measurements of the same height coordinate. _______________________________________________________________________________________ geoinformatics ctu fce 2011 161 figure 14: intersection method in the horizontal plane and equations used to compute x, y coordinates figure 15: equation used to compute z coordinate the geometry outline of some decayed features proved to be challenging when deciding upon detail points to be recorded with the instruments. in this regard, laser distometers were utilised for recording distances for important architectural features where accessibility was a problem. 2.3 pisa building a partial model of the data from the on-site survey, ortho-images, and laser scan data were used to obtain the measurements for this model and were plotted in a 3d cad environment. a circle was constructed based on the diameter of the tower (15.484m), from this a pendedecgon (a 15-sided, 15-angled, polygon) was positioned to represent the positions of the 15 columns on the base level (see detail a figure 8). once the pendedecgon was completed, the base of the column was constructed first in 2d and then in 3d (see detail b). a similar process was used for the capitol of the column, initially in 2d, and then primitive shapes were positioned to construct the 3d model of the capitol (detail c). next, the capitol and base were positioned and the shaft of the column was drawn with a cylinder to create the full the 3d model of the column (detail d). for the construction of the arch a boolean operation combined a half cylinder which was subtracted from a solid cuboid to replicate the arch (detail e). 
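returning for a moment to the intersection computations of section 2.2, the spreadsheet functions written by the students can be mimicked by a short sketch (illustrative values only; bearings are horizontal directions reckoned from north, and the vertical angle is an elevation angle).

```python
import numpy as np

def intersect_rays(station_a, bearing_a, station_b, bearing_b):
    """horizontal intersection: planimetric position of a detail point observed
    with two horizontal directions (bearings, radians) from two known stations."""
    da = np.array([np.sin(bearing_a), np.cos(bearing_a)])
    db = np.array([np.sin(bearing_b), np.cos(bearing_b)])
    A = np.column_stack([da, -db])
    t = np.linalg.solve(A, station_b - station_a)
    return station_a + t[0] * da

def point_height(station_z, instr_height, horiz_dist, vert_angle):
    """height of the detail point from the observed vertical angle."""
    return station_z + instr_height + horiz_dist * np.tan(vert_angle)

# hypothetical observations of one detail point from stations a and b
A_xy, B_xy = np.array([0.0, 0.0]), np.array([38.250, -4.120])
p_xy = intersect_rays(A_xy, np.radians(41.5), B_xy, np.radians(352.8))
dist_a = np.linalg.norm(p_xy - A_xy)
p_z = point_height(station_z=0.0, instr_height=1.56, horiz_dist=dist_a,
                   vert_angle=np.radians(28.4))
print("detail point x, y, z:", np.round([p_xy[0], p_xy[1], p_z], 3))
```

observing the same point from both stations, as the students did, gives two height values whose small difference is a useful internal check.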
when the objects were constructed, the positioning of each element was located using the pendedecgon to position the object, the function array was used to position the 14 other columns on their correct locations. similarly, the arch block was positioned on top of two columns and was once again arrayed to fill the entire area symmetrically, resulting in the final model illustrated in detail f. and g. it must be noted that the objects used to plot the 3d model (see detail g) are not parametric, the concept of using parametric objects are introduced in the building of the next three models. _______________________________________________________________________________________ geoinformatics ctu fce 2011 162 figure 16: hbim of base of pisa tower 3. survey: national monument edinburgh the second study is a survey of the national monument constructed in 1826. it is located in edinburgh and is a napoleonic war memorial, which is partially modelled on the parthenon in athens. the students were required to carry out a survey of the elevation of the building using total station, laser and image techniques, producing a full set of 3d drawings. initially historic research and background which included identified building typology and architectural documents which detail: geometry and principles of the external and internal structure and fabric construction; positioning of openings; proportional relationship of the building's elements; and classical detailing. a leica ts1205 reflectorless total station and digital photography was used for 2d/3d modelling and independent data was collected using laser hand held measurements on the object and rectifying the images to scale using target measurements on the building. when recording data with the total station an arbitrary coordinate system was created from a baseline running between two control stations. the coordinates of all visible detail points were recorded from the first control and stored in the instrument‟s internal data logger. the use of a second control station allowed for recording points, which were either obstructed or poorly visible from the first station. when detail points were recorded from both stations their initial coordinates from the first station were compared with the coordinates from the second station. differences in the horizontal plane were in the order of 10mm and 20mm in the vertical plane. the 3d modelling of objects and extraction of measurement data was based on the use of google sketch-up and archicad building information modelling software platforms. in figure 9, the terrain model was constructed using google sketch-up, a. and b. detail the solid construction and meshing process using triangles, the terrain is modelled in c. and the model is put in place in e. _______________________________________________________________________________________ geoinformatics ctu fce 2011 163 figure 17: hbim of national monument the process for constructing the 3d bim model is detailed in figure 10. the x, y and z co-ordinates are transferred to data sheets which represent the geometry of the objects which make up the entire model. mapping objects based on survey data can overcome the slow task of plotting and locating every vector to represent the 3d object. as parametric objects there is an opportunity to introduce details behind the object‟s surface concerning its methods of construction and material make-up. 
within historic building information modelling, the library of parametric objects exists in libraries or has to be constructed by editing existing objects or scripting them using geometric descriptive language. when a library part or parametric object is placed into the hbim, it is placed as an icon in 2d in the floor plan position (separated by height or formation levels) as detailed in a. and determined along the x, y-axis and in section and in elevation, on the z-axis. the library objects are not plotted directly in 3d environments, with the result that the objects are not lifted and placed within the 3d point cloud. the survey data can supply floor plans, whereas rectified images can detail elevations and sectional cuts as a map for location of library objects. further interrogation of survey data supplies numeric values for formation values (for z-axis location) and parametric values for the library objects themselves, these are recorded in data sheets. objects are also positioned onto the orthographic image in elevation and adjusted in side elevation and section for angular displacement. in figure 10, part of the process for mapping objects total station and image-based surveys is illustrated. the position of the slabs are detailed in a. and shown in 3 dimensions in b. the doric columns are chosen from library parts in c. and placed in floor plan in d. finally e. and f. illustrate the completed 3d hbim model. _______________________________________________________________________________________ geoinformatics ctu fce 2011 164 figure 18: hbim process 4. survey: karolinum prague charles university was founded in 1348 by king charles iv and is believed to be the oldest university in central europe. wenceslas iv bought a site containing a few houses which were rebuilt for the university and renamed the site karolinum. pre-university gothic structures include the chapel of saint cosmas & damian and an oriel window protruding from the southern wall (also visible in figure 11). the complex went through different architectural epochs during its existence; in the early 18th century it was rebuilt in baroque style then in the late 19th century some parts were rebuilt in neo-gothic style. the adjacent red brick building was considered for the hbim model. as with the the national monument in edinburgh the coordinates were directly obtained and recorded using a leica tsr1205 reflectorless total station. a baseline was setup along two control stations from which an arbitrary coordinate system was chosen. all detail points were recorded from at least one control station (figure 11). when no obstruction occured detail points were observed from both stations and compared. a typical error of less than 10mm and 20mm in the horizontal and vertical plan respectively occurred. however the restricted working space for positioning in potential control station locations around the karolinum building resulted in a poor geometry when recording building detail at higher elevation and this greatly affected the accuracy. _______________________________________________________________________________________ geoinformatics ctu fce 2011 165 figure 19: building being surveyed from the first control station [8] below is the hbim model based on the survey data and built with a similar methodology to the national monument edinburgh, a. represents the front view of the building, b. is an isometric view, c. d. and e. represent 3d cuts for analysis of the building. 
sections of the building are represented in details g and h and finally details f and i represent 3d elements from the building. figure 11: hbim of karolinum building _______________________________________________________________________________________ geoinformatics ctu fce 2011 166 5. survey: henrietta street dublin as the first step in extending the project of recording european historic structures, in may 2010, a group of north american purdue university students were introduced to the current conservation project, laser scan surveying and hbim of henrietta street. this is one of dublin‟s earliest georgian streets and is still intact, the street was constructed between 1730 and 1820. a survey was conducted to gauge student perception of henrietta street in regard to its history and of the recent renovation project. the students responded that they were very impressed with the detailed planning, care, and patience taken to correct the structural problems with minimal disruption to the interior finishes of the building. students also stated that there was a lot of value in renovating henrietta street, which represents some of the oldest existing georgian architecture in dublin. for the purdue students henrietta street offers a learning environment for understanding historic preservation, the next stage of this collaborative project with purdue is the use of the hbim models for remote student learning in their university environment. in figure 13 the hbim process of the facades of one of the buildings is illustrated starting with the laser scan of the buildings façade and the 3d model and automated drawings. figure 12: hbim process henrietta street 6. conclusion theoretical design framework for a virtual learning environment teaching and training architectural, engineering, construction and surveying students using computer simulations of buildings although recently developed is not new. in the case of this project a virtual learning environment is being developed which combines advanced recording and surveying with building information modelling (bim) to simulate and analyse existing buildings. in summary this learning software uses a purpose built plug-in library of parametric objects representing intelligent building components which are plotted within a virtual learning environment by the student onto a geometric framework based on laser or image surveys. the library of parametric objects contain the real world geometry, texture and specifications of building parts allowing the student to virtually analyse and experience _______________________________________________________________________________________ geoinformatics ctu fce 2011 167 different forms of architecture and structures. the parametric library parts are scripted using geometric descriptive language (gdl), which is an open scriptable language. gdl is an embedded programming language in archicad (a building information modelling software platform), which provides access to create and model parametric objects [9]. if building information modelling (bim) is incorporated into the design of virtual learning, it offers a very different experience from classroom-based learning. when interacting online individual students have their own perspective and experiences whereby they construct their own interpretations of the knowledge [10]. 
this was exploited in the design of learning software; students will be encouraged to construct their own interpretation from the simulation of realistic scenarios of historic construction process thus improving the learning outcomes. in the case of this project the objective is to create a web based learning software platform accessible from desktop personal computers, laptops and hand held devices, to enable individual and group based student learning. the learning software will simulate realistic case study scenarios based on site surveys of the historic construction process (how historic buildings are designed and constructed), thus bringing the real world into a virtual classroom. the learning software will be developed on the principals of instructional design, which focuses on the conditions in which learning will occur, and the principles of how people learn. a study in the civil and environmental department at worcester polytechnic institute confirmed that the use of bim facilitated effective learning mainly because it involves sharing, communicating, and group problem solving. it also helps students to actively engage in the process of planning, designing, and interpreting construction related data. moreover, the concept represents an invaluable tool to teach students the notion of cooperative work [11]. developing a web based learning platform using virtual models integrates technology into learning, promotes active learning, and develops an affective module [12]. the learning software will be developed to ensure the vle is created as a means of educating the students and regular evaluation of the module delivery will ensure the learning software does not become just another mode of lecturer delivery. the number of technology tools (hbim, laser scanning) being applied in various contexts will help to ensure the vle is an active system of education [13]. the integration of both learning web based technology and hbim is a new and innovative aspect in the field of virtual and computer based learning. there are three levels in the design of the virtual learning environment: level 1 – this is an introduction to the hbim platforms using google sketch-up and archicad software platforms. initial self-assessment identifies at what level a student should access the learning environment; this is built into tutorials to encourage the student to assess their entry level and encourage the student to revise and self-assess their work prior to moving to higher levels in the software. figure 14, below is a sample of students work using google sketch-up. figure 13: a sample of students work level 2 – students are introduced to the library of architectural elements and how to build and script these objects using geometric descriptive language (gdl), which is an open scriptable language that can be used to create parametric objects. gdl is an embedded programming language in archicad, which provides access to create and model parametric objects. the parametric objects are the components that the student brings together to form the entire building within a virtual environment. figure 15 describes an example of a gdl script to form a doric column. 
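the actual scripting of library parts is done in gdl inside archicad, as shown in the figure; the short python sketch below only illustrates the idea of a parametric definition of a simplified doric column driven by two inputs. the proportions used (shaft share, taper, twenty flutes) are conventional assumptions, not values taken from the project or from the gdl script in the figure.

```python
def doric_column(height, base_radius, n_flutes=20, taper=0.8):
    """simplified parametric doric column: a tapered, fluted shaft
    topped by a plain capital; all proportions are illustrative."""
    shaft_height = 0.85 * height
    return {
        "shaft": {"height": shaft_height,
                  "base_radius": base_radius,
                  "top_radius": taper * base_radius},
        "capital": {"height": height - shaft_height,
                    "radius": 1.15 * base_radius},
        "flute_angles_deg": [i * 360.0 / n_flutes for i in range(n_flutes)],
    }

# changing the two driving parameters regenerates the whole object
print(doric_column(height=5.2, base_radius=0.45))
```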
_______________________________________________________________________________________ geoinformatics ctu fce 2011 168 figure 14: gdl scripting level 3 – the students are introduces to the principles of building historic building information models based on the live site survey data and information sharing is introduced using student teacher web-based interaction inside the virtual learning environment [14]. student teacher web-based interaction inside the virtual learning environment using archicad bim server™ allows student and teacher to collaborate in real-time on bim models through standard internet connections from virtually any location. the bim models are located on a single server and accessed by the student and can be observed and assessed by the teacher as the students work through the virtual building. the learning software and virtual building models can be accessed by the student on the internet through pc, laptop and hand held devices allowing participation and support for traditional and non-traditional learners. figure 15: web interaction _______________________________________________________________________________________ geoinformatics ctu fce 2011 169 in conclusion, the initial testing of the new concept of hbim based virtual learning was evaluated through examining the students experience in using both the process and the software in creating the historic building information model (hbim). the evaluation process was conducted through interview and presentation carried out by each student. the students did experience the full hbim process, their reaction to each stage of the process differed; in stage one the students worked off virtual documents and had not as yet visited the site and this affected their enthusiasm and the accuracy of their work. the second stage where the students visited the site resulted in raised student motivation in carrying out the on-site survey, as this was a real experience for them to visit the site. finally, in the creation of the hbim models, the students found the software platforms were not totally compatible (from point cloud and survey data to 3d hbim). in addition, the building of the library objects in gdl tended to be beyond the scope of the students. all of these learning constraints identified by the students are now informing a redesign of the hbim virtual learning tool as part of the ongoing development. these modifications are been paralleled by research into the possibilities of including automated assessment in the hbim virtual learning tool. it is proposed to extent this project to other third level institutes and to explore a joint european and north america funded project. 7. references [1] murphy, m., et al.: historic building information modelling (hbim), structural survey, journal of building pathology 2009, vol 27; number 4, 311-327, emerald group publishing limited. [2] murphy, m., et al.: correlation of laser-scan surveys of irish classical architecture with historic documentation from architectural pattern books, the vii international, interdisciplinary nexus conference for architecture and mathematics san diego, california, usa, june 2008. [3] murphy, m., et al.: the processing of laser scan data for the analysis of historic structures in ireland. in: eds. ioannides, m., arnold, d., niccolucci, f. and mania, k. 37th cipa international workshop dedicated on e-documentation and standardisation in cultural heritage and 7th vast international symposium on virtual reality, archaeology and cultural heritage. 
nicosia, cyprus, 30th oct 4th november 2006, 135139. [4] murphy, m., et al.: parametric vector modelling of laser and image surveys of 17th century classical architecture in dublin. in d. arnold, f. niccolucci, a. chalmers (editors), vast 2007: the 8th international symposium on virtual reality, archaeology and cultural heritage nov. 27 29, 2007, brighton, uk. [5] murphy, m., et al.: an evaluation case study a historic building information model (hbim) of the leaning tower of pisa, iceri 2009 conference programme, edited by international association of technology, education and development, iated, madrid, 2009. [6] irvine, w. h., mclennan, f.: surveying for construction, mcgraw hill education co., 2006. [7] uren, j., price, wf.: surveying for engineers, fifth edition, palgrave macmillan, 2010. [8] kelly, a., et al.: hbim of karolinum (prague), dit module project, dublin institute of technology, 2011. [9] murphy, m., et al.: a flexible web based learning tool for construction and surveying students using building information modeling and laser scanning, iceri2008 conference programme edited by international association of technology, education and development, iated, madrid, 2008. [10] abrami, p. c., bures, e. m.: computer supported collaborative learning and distance education, the american journal of distance education, 10 (1996), 37-42. [11] salazar, g. et al: the building information model in the civil and environmental engineering education at wpi, asee new england section 2006 annual conference. [12] gagne, r., et al.: principles of instructional design, 5th ed thomson wadsworth, usa, 2005. [13] eisenstadt, m., vincent, t.: the knowledge web: learning and collaborating on the net, uk: kogan page, 2000. [14] salmon, g.: e-tivities: the key to active online learning, london, routledgefalmer, 2002. _______________________________________________________________________________________ geoinformatics ctu fce 2011 212 determination of st. george basilica tower historical inclination from contemporary photograph bronislav koska czech technical university in prague, faculty of civil engineering thákurova 7, prague 6, czech republic bronislav.koska@fsv.cvut.cz keywords: photogrammetry, bundle adjustment, datum problem, free network adjustment, covariant matrix abstract: a large amount of photographic material has been accumulated from the photography emerge in the nineteenth century. the most photographs record portraits, urbanistic complex, significant architecture and others important objects in the photography inception. historical photographs recorded a huge amount of information, which can be use for various research activities. photograph visual information is sufficient in many cases, but accurate geometrical information must be acquired from it in specific situations. it is the case of long-term stability monitoring of buildings in the prague castle area see [1] . for static analysis in the monitoring project, it is necessary to determine accurately specific geometrical parameter – mutual angle of st. george basilica towers in the north-south direction before the reconstruction started in 1888. the angle standard deviation must be solved as well. the task demanded using of photogrammetric methods. own implementation of general bundle adjustment had to be created to fulfill determination of reliable standard deviation of the angle, because standard photogrammetric software does not have all the necessary options. 1. introduction the paper deals with determination of st. 
george basilica towers historical angle in the north-south direction from contemporary photographs. st. george basilica has been founded in 920 ad and it is the oldest surviving sacral building in prague. basilica 41 meters height towers have been added after fire in 1142. the basilica is situated in the prague castle area. present inclination of the north tower is about 0.7 meter to the north direction on the top. st. george basilica, alike others important building in the prague castle area, is under long term stability monitoring using standard surveying techniques (precise levelling, total stations) see [1]. the last significant basilica reconstruction started in 1888. changing of the main nave construction was a part of the reconstruction and it could have affected the towers inclinations. therefore it was decided to determine the north-south towers angle before the year of the reconstruction beginning. a historical photograph can be used for the purpose, if it is joined to the present photogrammetric project. figure 1: j. eckert (1872) _______________________________________________________________________________________ geoinformatics ctu fce 2011 213 2. historical photographs two usable photographs were found. both of them were taken from the st. vitus cathedral tower, which was confirmed during project processing see figure ř. it wasn‟t possible to get original negatives, but good quality historical positives were found in both cases. the first photograph was made by j. eckert in 1872 (the photograph is further named eckert). its historical positive (257x185mm) is deposited in the prague castle archive. the archive also has medium format dispositive (see figure 1) made by a professional photographer. the original glass plate isn‟t preserved. the medium format negative was scanned on the professional negative scanner nikon super coolscan 8000ed using resolution 4000dpi. there exists a range of various transformations between original glass plate and final scan, which has influence on its accuracy (original glass plate – photograph – diapositive of photograph – scan). standard deviations and residuals from processing on the photograph shows that all named transformations were realized in high quality manner with low influence on the image geometry. the second historical photograph was made by f. fridrich in 1867 (see figure 2, the photograph is further called fridrich). it is a stereo–photo and its negative isn‟t preserved, but good historical positive is placed in the private archive of photographer p. scheufler. the positive was scanned on high quality scanner. figure 2: f. fridrich (1867) 3. measuring for the orientation purpose, it is necessary to identify as many as possible to present-day preserved points on the historical photograph. the most efficient method to fulfill the task is using present-day photogrammetric project, because it is possible efficiently measure large areas (areas presented on figure 1 and 2) by photogrammetry, whereas using standard surveying techniques would be much more costly. the other reason for using photogrammetry is necessity of identification of the same points in the historical photograph and in the present-day reality. it is much easier to carry out the identification in the similar photograph to the historical one (similar inner and outer orientation) than in the different view from the terrestrial trigonometric measurement. it would be suitable to realize camera positions and orientation in configuration presented on the figure 3. 
the only realizable possibility to accomplished suggested configuration is taking pictures from aerial model (further called unmanned aerial vehicle – uav). _______________________________________________________________________________________ geoinformatics ctu fce 2011 214 figure 3: suitable camera positions and orientation 3.1 present uav photographs the model easyuav (see figure 4) was used for taking photographs. easyuav is equipped with autonomous control of flight and with programmatic shooting of photographs – both, flight and shooting, can be planned before flight. figure 4: picture of the easyuav model the most important parameters of easyuav model are presented in the table 1. _______________________________________________________________________________________ geoinformatics ctu fce 2011 215 wing span 1.3 m length 1.3 m weight (without camera and battery) 700 g maximal laden weight 1 150 g usual load 450 g typical flight time 15 – 75 minut speed typical / maximal 40 / 90 km/h flight range radius 5 km table 1: basic parameters of the easyuav model the special lightweight camera sony nex-5 was chosen for the model. it is a representative of a new camera category (in 2010), which combines size and weight of “compact camera” category and some important properties of reflex camera. it incorporates concretely larger size ccd chip of the aps-c standard (23.4x15.6 mm in the case) and interchangeable lenses. the fix focal length lens 16 mm was used in the project because of its lightweight and geometrical stability. other important property of the camera is 14 mpixels resolution and full manual control possibility. the flight trajectory and shooting position for the project was defined in google earth software and transformed to the easyuav control software in advance (see figure 5 left). figure 5: flight trajectory and shooting position plan and an example of realized photography 95 photographs were acquired in total. the whole flight time (from takeoff to landing) took approximately ten minutes. one photograph with similar orientation as historical one is for illustration on the figure 5. it is evident that for higher accuracy/resolution of photographs shorter shooting distances should be used. flight trajectory was suggested with respect to safety aspect (flight deviation from planned trajectory) at least 20 meter above highest point in the prague castle area – south tower of st. vitus cathedral. the new flight was not realized for time reasons and more photographs that are accurate were acquired from terrestrial positions instead. 3.1 present terrestrial photographs some additional photographs were also acquired from outer gallery of st. vitus cathedral and from terrestrial place in front of st. george basilica to obtain higher accuracy of photogrammetric project. calibrated digital reflex camera _______________________________________________________________________________________ geoinformatics ctu fce 2011 216 canon eos500d with fix focus lens 35 mm were used for the task. an example of terrestrial photograph is on the figure 6. 4. processing it is possible to use two approaches for processing. the situation is easier if we only need an evaluation of historical angle. in the case it is possible to use standard photogrammetric software e.g. photomodeler. 
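returning to the remark in section 3.1 that shorter shooting distances would give a higher accuracy/resolution, the usual ground sample distance relation makes this concrete. the sketch below uses the lens and sensor figures quoted in the text (16 mm lens, 23.4 mm sensor width); the pixel count across the frame and the shooting distances are assumptions for a 14 mpixel aps-c camera, not values from the project.

```python
def ground_sample_distance(distance_m, focal_mm=16.0,
                           sensor_width_mm=23.4, pixels_across=4592):
    """approximate footprint of one pixel on the object (metres);
    pixels_across is an assumed value for a 14 mpixel aps-c frame."""
    pixel_size_mm = sensor_width_mm / pixels_across
    return pixel_size_mm * distance_m / focal_mm

for d in (50, 100, 200):   # assumed shooting distances in metres
    print(f"at {d} m one pixel covers about {ground_sample_distance(d) * 1000:.0f} mm")
```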
however, if we also need a reliable estimation of the historical angle standard deviation, we have to use a more complex approach, which includes the creation of our own bundle adjustment implementation (further abbreviated ba) with more possibilities than are available in standard software.

4.1 processing in the photomodeler software

the first processing was realized in the photomodeler software (further named pm). its results could be used for determination of the historical angle and were used as approximate values for our own ba implementation. the present-day photographs (11 from the easyuav and 8 terrestrial) were chosen in the first stage. a transformation was used for setting the project orientation and size. the z axis was put on the southwest edge of the south tower, which is almost perfectly vertical (see [1]), and the x axis was placed in the plane defined by points 42, 43 and 44, which represents the west walls of the towers, see figure 6. thanks to this axis orientation, only the xz coordinates determine the north–south angle and the y coordinate does not have to be used in further calculations.

figure 6: transformation for setting of project orientation and size

the basic statistical information about the project in pm is: standard deviation a posteriori σ0 (total error in pm) 0.759 pixel (a priori 1 pixel), 73 3d points, each 3d point marked on 5 photographs on average, and 20 points marked per photograph on average. the historical photograph was joined to the project next. the experience with the historical photograph eckert is described in the following text. an inverse camera (focal length, principal point and aspect ratio) was set for it. image points were identified on the historical photograph outside the towers area; 13 points were identified, see figure 7. it can be supposed that the chosen points were stable or nearly stable in their position between 1872 and the present day. σ0 increased to 0.944 pixel after processing together with the historical photograph. the standard deviation of the historical photograph was 2.52 pixels (maximal 5.8). the historical photograph has a higher standard deviation because it is an uncalibrated image after a range of transformations and because of its higher resolution of 5978x4298. the next step was fitting a surface through the west walls of the towers in the present-day photographs and marking points on the edges in the historical photograph. only the southwest edge of the south tower (further named h1 – at present nearly perfectly vertical) and the northwest edge of the north tower (further named h2) were used, because of their higher quality for identification. it is easy to compute the mutual angle of both towers in the north–south direction in 1872 from the points on h1 and h2 according to equation (1) – point indexes are used according to figure 7 and the axes orientation according to figure 6:

α = arctan[(x102 − x103) / (z102 − z103)] − arctan[(x96 − x97) / (z96 − z97)] (1)

the angles and their deviations stated in the text are computed for a height of 15 meters (the visible part of the towers is 14–17 meters) for the sake of transparency. the historical mutual angle α recomputed for 15 meters is 0.302 m derived from the eckert and 0.248 m from the fridrich. a more complex task is the estimation of the angle deviations, which is described further in the text.
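a minimal numerical sketch of equation (1) follows: the mutual angle is the difference of the two edge inclinations in the x–z plane, and, as in the text, it is expressed as the horizontal offset accumulated over 15 meters. the point coordinates in the example are placeholders, not the adjusted values from the project.

```python
import math

def edge_inclination(p_low, p_high):
    """inclination of a tower edge in the x-z (north-south) plane,
    from two points (x, z) on that edge; 0 means vertical."""
    return math.atan2(p_high[0] - p_low[0], p_high[1] - p_low[1])

def mutual_angle_as_offset(h1_low, h1_high, h2_low, h2_high, height=15.0):
    """alpha from equation (1), converted to metres over the given height."""
    alpha = edge_inclination(h2_low, h2_high) - edge_inclination(h1_low, h1_high)
    return math.tan(alpha) * height

# placeholder (x, z) coordinates for points 96/97 on h1 and 102/103 on h2
print(mutual_angle_as_offset((0.00, 0.0), (0.02, 14.0),
                             (10.00, 0.0), (10.30, 14.0)))   # about 0.30 m
```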
figure 7: choosing of identical points on historical and present-day photographs

a spin-off of the historical photograph orientation was the determination of its position during photographing in 1872, see figure 8. the meaningful position is also a check for gross errors in the project orientation.

figure 8: position and orientation of historical photograph shown in present-day photographs

4.2 methodology verification using standard surveying methods

an independent verification of the angle α derived from the present-day photogrammetry project is a comparison of its value with the results from the standard surveying techniques described in [1]. an α angle of 0.351 m for 15 meters is stated in the literature [1]. this value is not in tight correspondence with the angle from photogrammetry, which is 0.254 m. there is an objective reason for the discrepancy: the angle was derived only from the top 7 meters below the roof in [1], and the edges of the towers are not perfectly linear. special targets were placed on the towers for that measurement, see figure 9 left. that is why points 72 and 73 were added to the photogrammetric project too, see figure 9. when the angle between the new points and the top of the towers was computed, very good correspondence was achieved: the angle value from photogrammetry is 0.341 m.

figure 9: points 72 and 73 near standard surveying points for inclination determination

4.3 estimation of angle standard deviation

the most complex problem of the project is the estimation of the determined angle standard deviations. there are three essential reasons why the pm software and similar ones cannot be used for this purpose. first, it is not possible to get the full covariance matrix of the adjusted values from it. points related to individual photographs are very closely correlated, and that is why relations between those points have a significantly lower standard deviation than their total standard deviation. the second reason is that there is no possibility in the pm software to get the standard deviation of points computed from the intersection of a surface and the optical direction from the camera, which is the case of the tower points in the historical photographs. these standard deviations can generally be computed and depend on the results of the adjustment. the last reason is that it is not possible to use a different accuracy (weight) for individual images in the pm project. it is evident that historical photographs have lower accuracy than present-day ones, because they are uncalibrated and influenced by a range of transformations. the pm software shows standard deviations for the adjusted parameters (camera orientations and positions and 3d points). these standard deviations are "absolute", related to the project coordinate system, so they depend heavily on the project coordinate system definition. this problem is called the "datum" (or "gauge") problem and many materials are dedicated to it, e.g. [2, 3]. for example, if the most general method, the "free network method", is used in our project (in pm – processing without constraints and control points) and the law of error propagation [4] is applied to equation (1), the standard deviation of the angle σα is 0.618 m. if the full covariance matrix is used in the same example, the standard deviation of the angle σα is only 0.012 m.
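the effect of using the full covariance matrix instead of only point-wise deviations can be sketched with the law of error propagation, σα² = g s gᵀ, where g is the gradient of the angle with respect to the point coordinates. the numbers below are placeholders chosen only to show how strong correlations between points shrink the propagated deviation; they are not the project values.

```python
import numpy as np

def propagated_sigma(grad, cov):
    """law of error propagation for a scalar function: var = g S g^T."""
    grad = np.asarray(grad, dtype=float)
    return float(grad @ cov @ grad) ** 0.5

g = [0.07, -0.07, -0.07, 0.07]          # placeholder gradient of alpha

full_cov = 0.24 * np.ones((4, 4)) + 0.01 * np.eye(4)   # strongly correlated points
diag_only = np.diag(np.diag(full_cov))                  # correlations ignored

print("sigma_alpha with full covariance:", propagated_sigma(g, full_cov))
print("sigma_alpha, diagonal only      :", propagated_sigma(g, diag_only))
```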
if the datum is defined by 7 coordinate conditions – two points on h1 and the y coordinate of a point on h2 are chosen to be fixed (this represents an ideally chosen datum fixation for minimal standard deviations) – the standard deviation of the angle σα is 0.024 m. the stated deviations relate to the present-day angle and project. if the full covariance matrix is used in the law of error propagation, the results for functions of the adjusted parameters are invariant to the datum choice. it is necessary to use the full covariance matrix to get a reliable estimation of σα. the second aspect is the determination of the historical angle standard deviation. as was said before, there is no possibility to compute the standard deviation of points computed from the intersection of a surface and the optical direction from the camera in the pm software, but it is possible in general. a plane can approximate the west tower walls. a small movement of the plane in the west–east direction cannot significantly influence the angle α, so it is not necessary to assign any standard deviation to the plane parameters and they can be treated as constants (a standard deviation can be assigned to them if necessary). it is possible to define the simple constraint yi = 0 for points i = 96, 97, 102, 103 (see figure 7, upper right) because of the suitably chosen coordinate system, see figure 6. a bundle adjustment (further named ba) with these constraints gives a covariance matrix including points 93, ..., 103 and can be used for the σα computation. the last problem with processing in pm is the impossibility of using a different weight (standard deviation) for individual photographs. a default standard deviation of one pixel is set for manually marked points in an image. this is a reasonable choice for calibrated photographs. a historical photograph is not calibrated and is influenced by a range of transformations, and that is why the default value of the standard deviation does not seem appropriate for it. estimation of its true value is very complicated, but the photograph standard deviation from the ba can be used. the present-day photographs have σ0 in the range 0.30–1.26 pixel with mean 0.759, and the σ0 of eckert is 2.52 and of fridrich 1.86 in pm. 3d points were computed from five present-day photographs on average, which is why the σ0 a posteriori of the historical photograph can be used as a reasonable estimation of its accuracy. the only possibility how to use it in pm is reducing the resolution of the historical photograph, but this is not useful, because point marking would then be less accurate. the covariance matrix of the adjusted parameters in the ba is given by equations (2) and (3):

s_in = σ0,apri² · p_in⁻¹ (2)

s = σ0² (jᵀ p_in j)⁻¹ = (rᵀ p_in r / n) (jᵀ p_in j)⁻¹ = (rᵀ s_in⁻¹ r / n) (jᵀ s_in⁻¹ j)⁻¹ (3)

where n is the number of redundant measurements, p_in is the weight matrix, r is the residual vector, and the other parameters are described in the text. from equation (3) it is clear that the covariance matrix s is defined by the jacobian matrix j (the spatial configuration of the project), the covariance matrix of the input values s_in (the marking of points) and the a posteriori standard deviation σ0. s is not influenced by the actual residuals, other than through σ0 (in the case of using the a priori σ0, it is not influenced by the residuals at all). the only changeable parameter influencing the s matrix is the s_in matrix, and if this matrix is not chosen correctly, the resulting accuracy characteristics will not be reliable.
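a small numerical sketch of equation (3), as reconstructed above, is given below: σ0² a posteriori is computed from the weighted residuals and the covariance of the adjusted parameters follows from the normal matrix. the design matrix, weights and residuals are placeholders for a toy two-parameter adjustment, not the actual ba of the project.

```python
import numpy as np

def ba_covariance(J, P, r, n_redundant):
    """sigma0^2 = r^T p r / n,  s = sigma0^2 (j^T p j)^-1  (equation 3)."""
    sigma0_sq = float(r @ P @ r) / n_redundant
    return sigma0_sq, sigma0_sq * np.linalg.inv(J.T @ P @ J)

# toy two-parameter problem; the last two rows get a lower weight (~1/2.5^2),
# mimicking the down-weighting of an uncalibrated historical photograph
J = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
P = np.diag([1.0, 1.0, 0.16, 0.16])
r = np.array([0.3, -0.2, 1.1, -0.9])

sigma0_sq, S = ba_covariance(J, P, r, n_redundant=2)
print(sigma0_sq)
print(S)
```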
for illustration, in the case of using wrong (default 1 pixel) standard deviation for historical photograph we get σ α 0.044 m for eckert (0.038 for fridrich). in the case of using reasonable value 2.5 pixel for eckert (2 for fridrich) we get σ α 0.061 m for eckert (0.071 for fridrich). 5. results the main result of the project is determination of historical angle between st. george basilica towers and its standard deviation. angle α recomputed for 15 meters is 0.302 m derived from the eckert and 0.248 from the fridrich. its standard deviations σ α is 0.061 m for eckert and 0.071 m for fridrich. the present-day angle is 0.254 m with σ α 0.012 m. it is clear that angle change cannot be proofed from the presented results. change would be provable if its value is higher than 0.061x2 = 0.122 m. computed change is only 0.048 m (eckert). 6. summary determination of historical angle in the north-south direction between st. george basilica towers from contemporary historical photographs is presented in the paper. at first, the suitable historical photographs had to be found in the archives. next, taking of present-day photographs from proper positions had to be realized. the photographs were taken using small uav model and from neighbouring building. the determination of historical angle was pretty straightforward. the standard project in photomodeler software was sufficient for the task. the historical photograph was included in the project with inverse camera setting. the most demanding part was determination of angle standard deviation. standard photogrammetric software like photomodeler can‟t be used for the purpose for several reasons. it is not possible to get full covariant matrix from them, they don‟t give accuracy characteristic for intersection of surface and optical direction from the camera and it is not possible to use different weight for individual photograph in it. that is why own bundle adjustment implementation was created, which enables reliable estimation of the angle standard deviation. 7. acknowledgements this research has been supported by msm 6840770005. 8. references [1] procházka, j., záleský, j., jiuikovský, t., salák, j.: long-term stability monitoring in the prague castle area , acta geodynamica et geomaterialia. vol. 7, no. 4, p. 411-429, 2010. [2] mcglone, j. ch. (editor), mikhail, e. m. (editor), bethel, j. s. (editor), mullen, r. (contributor): manual of photogrammetry – fifth edition, asprs, 2004, isbn 1570830711. [3] triggs, b., mclauchlan, p., hartley, r., fitzgibbon, a.: bundle adjustment a modern synthesis, vision algorithms theory and practice, springer, vol. 34099, p.: 298-372, 2000, isbn: 3540679731. available online: [4] koch, k-r.: parameter estimation and hypothesis testing in linear models, springer, germany, 1999. http://www.springerlink.com/index/plvcrq5bx753a2tn.pdf variation detection and respondents’ feedback: the cause, effect, and solution of oil spills variation detection and respondents’ feedback: the cause, effect, and solution of oil spills ayodele sunday tologbonse∗ and esther oluwafunmilayo makinde laboratory for geospatial research, department of surveying and geoinformatics, faculty of engineering, university of lagos, akoka lagos state, nigeria ∗corresponding author: ayotologbonse@yahoo.com abstract. 
centred on occurrences of pipeline explosion and oil spills in a host community; a supervised classification technique, of land use/land cover variation detection was carried-out, with landsat imageries of three time intervals, to determine the percentage of variation between the time intervals. also carried-out, was a random sampling of questionnaires; dispatch to acquire respondents’ feedback. it addressed respondents’ demographic and social-economic composition of the sample population, the perception on the cause and the impact, and the effect of the oil spill and finally considered the possible solutions. information was subjected to descriptive analysis and an f-test statistical analysis in a 95% confidence interval. reports showed that land use/land cover classification had undergone series of percentage variation within the time interval considered, indicating ‘remarks’ of a rise or a decline. while, the measure of insecurity (of about 36.7%) is a prevailing element to the unceasing attack on oil pipelines and only a sustaining security measure (of about 40.8%) will evidently pave a wayout. wherefore advocating for community based policing, and a comprehensive technological sensor system, for monitoring of oil pipelines/facilities across the nation. keywords: land use/land cover variation; respondents’ feedback; test hypothesis. 1. introduction over the years, the amount of oil produced and transported between points of production, processing and distribution or export terminals has greatly increased as the demand of and dependence on oil increased. it has also been observed that thousands of barrels of oil have been spilled into the environment through storage facilities disaster and mainly oil pipelines in nigeria [8]. 1.1. nigeria’s oil economy nigeria joined the league of oil producing nations on august 3rd, 1956 when oil was discovered in commercial quantities and in africa today, it ranks as the leading oil and gas producer of all time [16]. it’s the eleventh largest in the world [8]. nigeria is the most populous nation in africa with four hundred and forty million peoples as declared in the national census in 2006. the attending of oil in commercial quantities in nigeria, signalled the beginning of a profound transformation of nigeria’s political and economic landscape [16]. the strength of nigeria’s economy is the petroleum sector, contributing about 90.0 % of the nation’s foreign exchange earnings and about 25.0 % of the gross domestic products. a significant proportion of the nation’s oil is produced onshore and is subsequently transported by pipelines [8] and the pipelines security has become a great challenge to the nation, especially to the several host community harbouring such national assert. as it is well-known, energy geoinformatics fce ctu 17(1), 2018, doi:10.14311/gi.17.1.1 1 http://orcid.org/0000-0001-5857-1349 https://doi.org/10.14311/gi.17.1.1 http://creativecommons.org/licenses/by/4.0/ a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills plays a strong role in the economic, socio-political and environmental spheres of every nation and its importance can be seen in every facet of life and energy generation is largely dependent on petroleum products [13], although there is some contribution from hydropower, biomass and coal. according to [2], petroleum consumption has been on the increase in nigeria since the early 1980s. 
this upward trend is evidenced in the energy consumption figures of 2006, 2007 and 2011 where petroleum products represents 53.0 %, 67.3 %, and 68.5 % in that order of the total energy consumed in the nation [7]. 1.2. societal effects of oil spills the nation has suffered the negative environmental effects of oil development ever since its discovery. in recent times, the development of the nation’s oil industry, combined with a population increase and a lack of implementation of environmental regulations has led to numerous self-inflicted damage to its environment [4]. meanwhile, the occurrence of oil spill is due to a number of causes which include the following: corrosion of pipelines, sabotage and oil production operations [5]. sabotage and oil siphoning has become a leading cause of oil spills in nigeria [18] and being a major issue, as well contributed to further ecological degradation [5]. according to [11] damaged oil pipelines may go undetected for days, and repair takes even longer period. as a result oil tapping has become a big business, with the stolen oil swiftly making its way into the black market. the grounds of pipeline damage and leakage can differ greatly varying from material failings and pipe corrosion to ground erosion, tectonic movements on the sea bottom and contact with ship anchors and bottom trawls particularly in the offshore operations while pipeline vandalization is observed as the significant cause of pipeline damage onshore in nigeria [8]. this dangerous act of vandalization has led to several incidences of oil spill damaging vital national asserts and the ecosystem at large. experience has shown that oil spill into the environment holds negative consequences, such as the problem of air pollution and vegetation loss; including aquatic habitat shrinkage and depleting of the soil nutrient component [8]. whatever the angle oil spills is viewed, its outcome is an evidential threat to human health including hazardous effects on lands, water bodies, and vegetation’s, swamps, marine environs in the affected host communities in nigeria [12]. key oil spills has attracted international attention and created awareness due to the associated ecological, human and conservational risk and damage that result from such spills. the common causes of oil spills are oil blowouts from the flow stations, equipment failure, and leakages from aged and corroded network of the pipelines, operational disaster, maintenance blunder, sabotage, bunkering and oil theft operations [12] and meanwhile, the well-endowed ecological resources are destroyed. in reaction to this [14] noted that, the petroleum act and the oil pipeline act demands that affected residents be compensated for all intangible sociocultural and health environmental assets lost and market related real estate (for example: land, buildings, plant and machinery, severance, injurious affection, disturbance) to mention but a few [6]. well-known, is the lack of enactment of these laws in protecting the environs and its populace. ultimately, these incidences continue to reoccur and there effects are well patent in the society. geoinformatics fce ctu 17(1), 2018 2 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills 1.3. variation detection technique land use/land cover variation is one of the major driving powers of international conservational change, and central to the sustainable development debate and it adds to the main challenges that impact the original landscape [15]. 
report [23] showed that, these variations, brings drastic impact on the physical and social environments which have been researches central point of reference. these spam around its impact on the capability of natural systems to support livelihood [25], biodiversity [17], soil degradation [26], and water quality, land and air resources, ecosystem and climate [15]. furthermore, [1] explained that, the land use/land cover variation of a region is the result of relationship of both natural and socioeconomic factors. issues of land use/land cover and its effect on ecological sustainability and human wellbeing has become of great worry all over the world. variations in the land use/land cover arrays impact significantly on local and global environmental conditions as well as economic and social wellbeing. understanding how the arrays are influence by these factors would provide new proportions to policy making and public policy assessment. variation detection is the method of determining differences in the state of an object or entity, phenomenon or occurrence by observing it at different epochs or times. it’s an important procedure in monitoring and managing natural resources and urban change. these provide quantitative assessment of the spatial range of the area of interest (aoi). meanwhile, the variation detection is described in four important categories; detecting the variations that have occurred, identifying the nature of variation, measuring the range to the variation degree, and analysing the spatial range of variation [1]. 2. procedure a geospatial technique was used to analyse the classification variation in determine the effect of oil pollution to communal life, the environment and it’s asserts. and to proof the outcomes that surrounds oil pollution through the medium of respondents’ feedback. wherefore, the procedure undertaking in achieving the needed outcome is outlined accordingly. 2.1. land use/land cover variation the supervised classification method was used to identify land use/land cover classes of built/developed region, vegetation/forest region, bare/undeveloped region and water/river region. this well describes the variation that emanated as a result of the landsat imagery time intervals considered. meanwhile, the spatial range of each class was calculated in square kilometre and presented in percentage values (%). the percentage variation to determine the ‘variation degree’ was calculated [20]. p = {(a − b) /b}× 100 (1) where p = percentage variation of land use/land cover for a particular purpose within a specified time interval. geoinformatics fce ctu 17(1), 2018 3 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills a = region under that particular purpose of land use/land cover after the time interval. b = region under that particular purpose of land use/land cover before the time interval 2.2. classification accuracy analysis accuracy analysis reports the supervised land use/land cover classification of the time intervals of the landsat imagery under consideration (2011, 2014 and 2015). it’s a procedure for quantifying how good a job was done by a classifier and how accurate classification was. accuracy analysis is an important part of classification. it compares the classification product with a reference data that is believed to reflect the true land use/land cover accurately. the source of reference data was the landsat imagery. figure 1, showed a print screen of the accuracy analysis. 
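a minimal sketch of the accuracy-analysis step just described is given below: overall accuracy is the share of correctly classified reference pixels, and the kappa statistic compares it with the chance agreement derived from the row and column totals of the confusion matrix. the matrix used here is one possible matrix consistent with the 2011 reference and classified totals reported later in table 2; it reproduces the reported 88.0 % overall accuracy and a kappa of about 0.814, but the actual erdas error matrix was not published, so the off-diagonal split is an assumption.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """cm[i, j]: reference class i classified as class j (pixel counts)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    return observed, (observed - expected) / (1.0 - expected)

# assumed 4x4 matrix (built, vegetation, bare, water) matching the 2011 totals
cm = [[43,  0, 7, 1],
      [ 0, 33, 4, 0],
      [ 0,  0, 4, 0],
      [ 0,  0, 0, 8]]
print(overall_accuracy_and_kappa(cm))   # about (0.88, 0.814)
```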
figure 1: print screen views of accuracy analysis procedure on erdas imagine. 2.3. respondents’ feedback the procedures undertaken include: sample size population = {(t n/p s)}× 100 % (2) = {(147/609.173)}× 100 % (3) = (0.024%) (4) where t n = total number of questionnaires. p s = population size of ojo local government area, according to the national census of 2006 in nigeria. geoinformatics fce ctu 17(1), 2018 4 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills the total number of 147 questionnaires was randomly sampled in the host community. a total of twenty questions was outlined, four sections in all, with the appropriate question for each section. the open ended option was adopted which helped streamline respondents’ thoughts. question boarders on the demographic and social-economic composition of the sample population, perception on the cause of the oil spill, perception on the effect of the oil spill, and finally considered the possible solutions to the menace. the questions were aroused from perceived issues, reconnaissance and information gathered, and consultation from a number people within and withal the host community. 2.4. test hypothesis the information of the respondents’ feedback was subjected to: descriptive analysis, and f-test statistical analysis. 3. analysis and discussion this identifies in detail the intended results analysis, presentations, and discussion. 3.1. variation report the analysis identified the ‘variation degree’ that occurred at the time intervals considered. the interval checks carried out were from 2011 to 2014 and 2014 to 2015. this were necessary to ascertain the differences the land use/land cover had undergone in the course of such time and to reference the impact of oil spills over such time intervals, as it affects the “subject matter” in line with the host community and environs. the variation analyses are as follows in table 1, figure 2 and figure 3. table 1: variation degree analysis report of the time intervals. land use 2011 2014 2015 2011 to 2014 2014 to 2015 / land cover (%) (%) (%) (%) remarks (%) remarks built 44.0 64.0 69.0 45.0 a rise +8.0 a rise / developed region vegetation 32.0 29.0 26.0 -9.0 a fall -10.0 a fall / forest region bare 15.0 4.0 2.0 -73.0 a fall -50.0 a fall / undeveloped region water 9.0 2.7 2.5 -67.0 a fall 7.4 a fall / river region 3.2. accuracy report the analysis showed the accuracy totals (at), the overall classification accuracy (oca) and the overall kappa statistics (oks) for each of the land use/land cover classification. geoinformatics fce ctu 17(1), 2018 5 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills figure 2: chart showing a broader view of the ‘variation degree’ of the time intervals. the overall classification accuracy for 2011 was 88.0 %, 2014 was 84.0 % and 2015 was 83.0 % respectively. the table presented with columns of class name (cn), reference totals (rt), classified totals (ct), number correct (nc), producers accuracy (pa) and users accuracy (ua). the analysis was as follows in table 2. table 2: land use/land cover accuracy analysis report. 
at cn 2011 2014 2015 rt ct nc pa(%) ua (%) rt ct nc pa (%) ua (%) rt ua nc pa (%) ua (%) built / develped region 51 43 43 84 100 48 58 47 98 81 55 71 55 100 77 vegetation / forest region 37 33 33 89 100 44 31 31 70 100 37 21 21 57 100 bare / undeveloped region 4 15 4 100 27 4 8 3 75 38 2 3 2 100 67 water / river region 8 9 8 100 89 4 3 3 75 100 6 5 5 83 100 total 100 100 88 100 100 84 100 100 83 oca = 88.0 % = 84.0 % = 83.0 % oks = 0.8141 = 0.7245 = 0.6782 3.3. feedback report analysis of respondents’ bio-data recorded the following. this is the demographic and socialeconomic distributions of the respondents. the background information of respondents was deemed necessary. the ability of the respondents to give satisfactory information on the study variables greatly depends on their background. the background information of respondents geoinformatics fce ctu 17(1), 2018 6 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills figure 3: map showing an analysis of the ‘variation degree’ of the time intervals. solicited data on the samples and it’s presented in categorizes of: sex, age, education levels, marital status, income level and occupation (table 3). it indicated 57.8 % out of the total 147 respondents were male, while, 42.2 % were female. it showed that, 42.2 % were between the age of 15 and 25, 30.6 % 26 and 36, 19.0 % 37 and 47, and 8.2 % were 47 years and above. meanwhile, the married were of 51.0 %, single were 40.1 %, 4.1 % were divorce and 4.8 % were widows/widower. 48.3 % were into trading/business, 4.1 % were farming/hunting, 1.4 % was of the fishing occupation, and 8.8 % were civil servant, while, 37.4 % did not express their opinions. likewise, 13.6 % of respondents’ earn between 5000-10000, 6.1 % earn between 10000-15000, and 22.4 % earn between 15000-20000. but, 34.7 % earn an income of 20000 and above. responses of about 23.1 % were not indicated. then, 7.0 % of the respondents’ had qualification of primary school, 38.8 % secondary school, and 14.3 % tertiary respectively. while 46.3 % were of other categories. analysis of respondents’ perceived cause of oil spill recorded the following. this is the perception of respondents on the cause of oil spill. these delve in understanding the view of the respondents as it’s meant to assess the effect on the host community. the results presented reveals perceived cause, categorized into: the meaning of oil spill, the incident of oil spill if any, the time it occur, and the perceived cause (s) (tables 4). it showed that only 89.1 % could give a ‘yes’ assent to the incident of oil spill in the community, far above the 6.1 % who said ‘no’, while, 2.7 % were not aware. likewise, 51.7 % of the respondents’ believed the major cause of the oil spill was pipeline vandalization, 17.0 % pipeline leakage, 2.7 % went with leakage/spill from tanker, and 17.7 % poor maintenance procedure. basically, procedures from the respondents’ feedback gave a positive acknowledgement of geoinformatics fce ctu 17(1), 2018 7 a. s. tologbonse and e. o. 
makinde: the cause, effect, and solution of oil spills table 3: respondents bio-data variable options frequency (f) percent (%) sex male 85 57.8 female 62 42.2 age 15-25 62 42.2 26-36 45 30.6 37-47 28 19.0 above 12 8.2 marital statute married 75 51.0 sinlge 59 40.1 divorced 6 4.1 widow/widower 7 4.8 occupation trading/business 71 48.3 farming/hunting 6 4.1 fishing 2 1.4 civil servant 13 8.8 nil 55 37.4 income 5000-10000 20 13.6 10000-15000 9 6.1 15000-20000 33 22.4 20000 & above 51 34.7 nil 34 23.1 education primary 1 0.7 secondary 57 38.8 tertiary 21 14.3 other categories 68 46.3 total of each variable 147 100 the incident of oil pollution and its main cause. this was significant and pivotal to the development and analysis of this research paper, given a broader view to its content and results. such outcomes, helped to further boast upcoming contents, results, reports and analyses as the processes develops. analysis of respondents’ perceived effect of oil spill recorded the following. this is the respondent’s perceived effect of oil spill. the information of respondents was deemed necessary because the ability of the respondents to give satisfactory information on the study variables is of great importance to this paper, in analysing situation that surrounds the reasons for hypotheses and the presented categorizes which is: the degree of effect, areas it has affected, and the level of occurrence of disease, if there have been any (table 5). therefore, it revealed a 53.7 % of the respondent identifying the impact of oil pollution affecting the air, water, and land. this resulted also to neglect by other communities and the government, given a 1.4 % percentage value. environmental pollution of about 34.0 % was recorded, leading to series of features like; un-conducive environment for the populace (26.5 %), negative impact on health and life (12.9 %), loss of livelihood (6.1 %), insecurity increase (36.7 %), and several health issues (19.7 %). geoinformatics fce ctu 17(1), 2018 8 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills table 4: respondents perceived cause of oil spill. has there been any incident of soil spill in the host community? what do you think is the cause of the oil spill? options response (%) options response (%) yes, there has been 89.1 pipeline vandalization 51.7 no, there hasn’t 6.1 pipeline leakage 17.0 really not sure 2.7 leakage/spill from tanker 2.7 poor maintenance procedure 17.7 table 5: respondents perceived effects of oil spill. how has oil spill affected the host community? what can be said about the effect in the host community? options options response (%) response (%) it has affected the air, water and land, thereby resulting in hardship 53.7 it has brought neglect by other communities 1.4 polluting the environment making it not conducive for our children to play 26.5 it has negatively affected our health and life 12.9 loss of jobs 6.1 insecurity 36.7 health issues 19.7 environmental pollution 34.0 analysis of respondents’ perceived solution of oil spill recorded the following. this is the perceived solution of the respondents to the causes and effects of oil spill. the breakdown on how oil spill could be tackled is very vital to this paper and researches at large. it also aided to assess the impact oil spill posed in entirety. the perceived solution information from the samples is presented in categorizes of; the inputs of individual, community, relevant authorities and the way-out in tackling the challenges. 
in addition, the perceived solution of the respondents’ to the causes and effects of oil spill, produced responses in table 6. it indicated that amongst all viable solution (way-out) 40.8 % of respondents’ said security agent should be found guiding the entire pipeline effectively. by this response, 17 % admonished the authorities to adopt a feasible plan for maintenance resolutions and 27.2 % said by privatizing the sector it will bring for proper supervision. meanwhile, 15.0 % resolve that, the community should be given the right to secure the pipelines, thus, bringing about employment opportunity to a line-up of unemployed youth. all this points are well effective and this adds to the ingredients’ for a better policy standards in the oil sector of a nation. geoinformatics fce ctu 17(1), 2018 9 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills table 6: respondents perceived solution of oil spill. what do you think is the solution (way-out)? options response (%) security agent should be found guiding all entire pipelines 40.8 the authorities should adopted a plan for maintenance resolutions 17.0 the government should privatize the sector for proper supervision 27.2 the host community should be given the right to secure the pipelines 15.0 thereby, bringing about employment 3.4. hypothesis report the hypothesis report showed an f-test statistical analysis. this analysis is referred to as a reliability analysis frequency test which showed the mean statistics and the confidence interval at 95.0 %. five items of the research questions, which boarders on oil spill perceive causes, effects and solutions were identified and analysed. the significance and correlation of the research questions were tested for. these are presented in table 7 and 8 correspondingly. therefore, the f-test hypothesis is as follows: h0: the oil spill does not significantly affect the research area. h1: the oil spill does significantly affect the research area. table 7 showed the mean statistics of the selected research questions of the respondents that concerns the cause; effect and solution of oil spill in the research area (that is, item 1, 2, 3, 4, and 5). the grand mean for the five items was 1.93 and the variance was 17.383. table 7: items mean statistics. item research question mean std. deviation 1. has there been any incident of oil spill in the host community? 1.04 0.189 2. what do you think is the cause of the oil spill? 1.80 1.114 3. how has oil spill affected the host community? 1.96 1.170 4. what can be said about the effect in the host community 2.82 0.968 5. what do you think is the solution (way-out)? 2.04 1.043 total 9.66 4.169 variance = 17.383 n of items = 5 grand mean = 1.93 table 8 described the output of the f-test statistics analysis and whether there is a statistically significant difference between the group means. it is seen that the significance level was 0.000 (p = 0.000), which is below 0.05, the acceptable allowance. hence the null hypothesis (h0) is rejected and the alternate hypothesis (h1) accepted. it geoinformatics fce ctu 17(1), 2018 10 a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills table 8: f-test statistical analysis on items. item analysis sum of squares df mean square f sig. 
3.5. discussion
the analysis based on the findings testifies to the fact that, over time intervals, land use/land cover is affected either positively or negatively [1]. with respect to oil spills and their challenges, this is a worrying trend for the subject matter, and it has majorly impacted the environs and the ecosystem negatively. meanwhile, the results of the landsat imageries used showed a series of variations in the percentage values of the entire land use/land cover classification (table 1). specifically, vegetation/forest and water/river underwent a series of declining changes over the time interval under consideration, which could be attributed to the impact of oil spill in the environment, notwithstanding other factors (figure 2). in view of this, [10] reported that oil spill affects vegetation and water, and that the impacts of even small spills can send ripples into surrounding ecosystems and affect communities beyond the immediate spill area. clearly, the vegetation/forest, bare/undeveloped and water/river regions were the worst hit when compared to the built/developed region, especially the vegetation/forest region. most of it was burnt off as a result of oil spill fires and is gradually being replaced by light forest. similar reports by [18], [20] and [21] testified to these specifics. also, the analysis of the water/river region showed a fall of 67.0 % between 2011 and 2014 and a slight fall of 7.4 % between 2014 and 2015. this could be a result of the nature of water movement, which can be credited to either natural or artificial means. this submission is in line with [24], who stated that the loss could be attributed to the division of larger water bodies into lesser ones and/or their reclamation for built-up edifices. these attest to the percentage rise in the built/developed region and the fall in the bare/undeveloped region at both time intervals of this report (table 1). in view of this, the role of oil spill impact on the environment and its land cover cannot be ruled out. this poses a grave concern and is of direct relevance to this research. the figures give a broader analysis of the degrees of variation (figures 2 and 3), while the reports on the respondents' feedback further discuss its role in the host community. consequently, the incidence of oil spill in different parts of the world has caused severe technical hitches and dangers to the environment. some of the reasons were mentioned and analysed in this research paper. the causes of oil spill include pipeline vandalization, pipeline leakage, leakage/spill from tankers and poor maintenance procedures. the emphasis on pipeline vandalization, at 51.7 %, points to the fact that vandals exploit this medium to siphon oil products for selfish interest (table 4). statistical gathering indicated that some members of the community and its environs collaborate with vandals to siphon oil; the same goes for some security personnel meant to protect such facilities.
such actions have not always gone down well. research has shown that suspected pipeline vandals mostly lose their lives through explosions. for example, scores of dead vandals were recorded on 07 november 2014 in the ijeododo community of lagos. a similar event was reported at ilado community in lagos, recording the deaths of more than 250 persons, which is still far fewer than the 1200 immediate deaths recorded at jesse in the niger delta, nigeria [9, 20]. these findings suggest that either the pipelines were laid in a manner that encourages easy vandalization, or the safety laws to be considered while laying the pipelines were not implemented, and/or the security measures put in place to protect oil facilities were a mirage. despite these issues, recent experience in nigeria has shown that the integrity and safety of these pipelines have been continually compromised by the activities of vandals and saboteurs. these vandals fracture the oil pipelines with the criminal intent of obtaining and appropriating petroleum products for commercial purposes or personal use [19]. the respondents' analyses further testified to the fact that pipeline vandalization is the major cause of oil spills, which often leads to fire outbreaks. the assessment revealed that, of the 89.1 % of respondents who acknowledged the incidence of oil spills in the host community, 51.7 % identified pipeline vandalization as the major cause, and this outweighs the other causes of oil spills on the environment, the least of all being leakage/spill from tankers with 2.7 %. the findings identified here are expected to help solve impending challenges that spring from pipeline vandalization and oil spill incidence. these challenges could include economic losses, hazards to human health, increasing poverty levels in the communities and a reduced rate of community development. environmental pollution, health issues and insecurity are also part of the challenges. the assessment of the host community revealed that only 34.0 % identified environmental pollution, 19.7 % health issues and 36.7 % insecurity as effects of oil spill, showing nonetheless that oil spill can pollute the environment and make it not conducive for human activities. this pollution, which affects the air, water and land, mostly results in hardship and negatively hampers healthy living. moreover, facts gathered from the host community reported that the frequent oil pipeline explosions caused by vandals in the area led to outages of the electric power supply lasting for very long periods. this brought hardship, and the cost of operating other sources of power such as generators was weighty. the findings also revealed that in 2013 a power outage lasted seven months (may to december) and a similar incident occurred in 2015. therefore, the presence of security around oil facilities and the continuous implementation of the law by enforcement bodies in protecting oil pipelines/facilities in nigeria is of great significance. its effectiveness could take the form of community-based policing, which gives the community the right to secure the pipelines, thereby bringing about employment and reducing continuous attacks. in the long run, a comprehensive technological system to help monitor pipeline facilities across the nation is encouraged.
this agreed with [3], [20] and [21], which states that monitoring of infrastructures such as oil pipelines is better supported by wireless technologies and virtual machines. 4. conclusion the aim of this process was to outline the findings of the weighty effects of oil pollution on communal life, land use/land cover classification, with additional proofs from respondents’ feedback consequently. substantially, all views as shown the impact of oil spill incidences from unlawful activities around oil pipeline/facilities, to its effect on the environment, humans and the ecosystem. the geospatial analysis of the classification also showed the level of changes that occurred within the time interval considered. at most, the ideals of safeguarding the nations human, and natural assert (land, water, forest and vegetation) and man-made monuments (oil pipeline/facilities) in limiting the adverse effect of oil pollution are not negotiable. this research paper put forwards its argument, and its advocating for the need of community based policing, and in a long run a comprehensive technological sensor system, to help monitor, oil pipeline/facilities across the nation. acknowledgements our appreciation goes to the host community and the department of surveying and geoinformatics, faculty of engineering, university of lagos, akoka, lagos state, nigeria for their support. reference [1] abiodun o, et al. “land use change analysis in lagos state, nigeria, from 1984 to 2005”. in: fig working week, marrakech morocco (may 2011), ts09c spatial information processing ii, 5142. [2] agusto and co limited. “industry report, downstream oil and gas”, 2008. url: http: //s3.amazonaws.com/zanran_storage/proshareng.com/contentpages/2461544354. pdf. [3] akhondi m et al “applications of wireless sensor networks in the oil, gas and resources industries”. in: proceedings of the 24th ieee international conference on advanced information networking and applications (aina ’10), (april 2010), pp. 941–948, perth, australia. doi: 10.1109/aina.2010.18. [4] akpofure e et al. “the adverse effects of crude oil spills in the niger delta”, 2000. url: http://www.waado.org/environment/petrolpolution/oilspills_adverseeffects. html. [5] anderson i. “niger river basin: vision for sustainable development”. the world bank washington d.c., 2005, pp. 1–131. geoinformatics fce ctu 17(1), 2018 13 http://s3.amazonaws.com/zanran_storage/proshareng.com/contentpages/2461544354.pdf http://s3.amazonaws.com/zanran_storage/proshareng.com/contentpages/2461544354.pdf http://s3.amazonaws.com/zanran_storage/proshareng.com/contentpages/2461544354.pdf http://www.waado.org/environment/petrolpolution/oilspills_adverseeffects.html http://www.waado.org/environment/petrolpolution/oilspills_adverseeffects.html a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills [6] bello v. communities’ willingness to pay for protection. in: european centre for research training and development uk, 2015, pp. 67-76. http://www.eajournals.org. [7] energy information administration (eia). “annual energy review 2011”. us energy information administration, (september 2012). [8] eyo-essien l. “oil spill management in nigeria: challenges of pipeline vandalism in the niger delta region of nigeria”. national oil spill detection & response agency (nosdra), abuja-nigeria, 2007. [9] fadeyibi i et al. “burns and fire disasters from leaking petroleum pipes in lagos, nigeria: an 8-year experience”, 2011. 37:145-52. [10] hess r and kerr c. 
“a model to forecast the motion of oil on the sea”. in: proceedings of the oil spill conference, 1979, pp. 653-663. [11] human rights watch. “the price of oil retrieved” (may 2007). url: http://www. hrw.org. [12] ite a et al. “petroleum exploration: past and present environmental issues in the nigeria’s niger delta”. in: american journal of environmental protection, 2013, pp. 78–90. [13] iwayemi a. “nigeria’s dual energy problems: policy issues and challenges”. in: international association for energy economics, 31st iaee international conference, istanbul, turkey, (june 2008). [14] kalu i. “an economic contribution to environmental degradation remediation of the niger delta”. the estate surveyor and valuer, 25 (1). [15] lambin e et al. “are agricultural land-use models able to predict changes in land-use intensity”? in: agriculture, ecosystems and environment, 2000, pp. 321-331. url: http://dx.doi.org/10.1016/s0167-8809(00)00235-8. [16] legborsi s. “the adverse impacts of oil pollution on the environment and wellbeing of a local indigenous community: the experience of the ogoni people of nigeria”, 2007. pfii/2007/ws.3/6 original: english. pg.2. [17] liu j and ashton p. “formosaic: an individual-based spatially explicit model for simulating forest dynamics in landscape mosaics”. in: ecol. modell, 106, 1998, pp. 177-200. url: http://dx.doi.org/10.1016/s0304-3800(97)00191-9. [18] makinde e and tologbonse a. “oil spill assessment in ijeododo area of lagos state, nigeria using geospatial techniques”. in: ethiopian journal of environmental studies & management 10(4) (may 2017), pp. 427–442. issn: 1998-0507. url: https://dx. doi.org/10.4314/ejesm.v10i4.1. [19] okoli c, and orinya s. “oil pipeline vandalism and nigeria’s national security federal university lafia, nigeria”. in: global journal of human social science political science. global journals inc. (usa), 2013. online issn: 2249-460x & print issn: 0975-587x. [20] omodanisi e. et al. “a multi-perspective view of the effects of a pipeline explosion in nigeria”. in: international journal of disaster risk reduction, 2013. url: http: //dx.doi.org/10.1016/j.ijdrr.2013.11.002i. geoinformatics fce ctu 17(1), 2018 14 http://www.hrw.org http://www.hrw.org http://dx.doi.org/10.1016/s0167-8809(00)00235-8 http://dx.doi.org/10.1016/s0304-3800(97)00191-9 https://dx.doi.org/10.4314/ejesm.v10i4.1 https://dx.doi.org/10.4314/ejesm.v10i4.1 http://dx.doi.org/10.1016/j.ijdrr.2013.11.002i http://dx.doi.org/10.1016/j.ijdrr.2013.11.002i a. s. tologbonse and e. o. makinde: the cause, effect, and solution of oil spills [21] omodanisi e. “resultant land use and land cover change from oil spillage using remote sensing and gis”. in: research journal of applied sciences, engineering and technology 6(11) (june 2013), pp. 2032-2040. issn: 2040-7459; e-issn: 2040-7467. [22] rosenblum m and garfinkel t. “virtual machine monitors: current technology and future trends”. in: ieee computer society, 2005, pp. 39–47. url: http://dx.doi. org/10.1109/mc.2005.176. [23] veldkamp a and verburg p. “modelling land use change and environmental impact”. in: journal of environmental management, 2004. url: http://dx.doi.org/10.1016/j. jenvman.2004.04.004. [24] victor t and jiangfeng li. “analysis of land use and land cover changes, and their ecological implications in wuhan, china”. in: journal of geography and geology vol. 3, no. 1 (september 2011). [25] vitousek p et al. “human dominated earth’s ecosystems” in: science, 277(5325), 1997, pp. 494-499. 
url: http://dx.doi.org/10.1126/science.277.5325.494.
[26] trimble s and crosson p. "land use: u.s. soil erosion rates - myth and reality". in: science, 289, 2000, pp. 248-250. url: http://dx.doi.org/10.1126/science.289.5477.248.

preparation and submission of the nomination file of the oasis of figuig (morocco) for inscription on the world heritage list: impacts and uses of a gis
laurence gillot* and andré del+, *université denis diderot, paris, france, +espace virtuel de conception architecturale et urbaine (evcau), ecole d'architecture paris val de seine, paris, france.
keywords: gis; data communication; archaeological and architectural data
abstract: at the request of the municipality of figuig, a team of scientists, working under the supervision of professor jean-pierre vallat of the university paris diderot and the school of architecture paris-val-de-seine, was entrusted with the task of drawing up an inventory and making an analysis of the cultural properties of the oasis. this program has been led in order to assist the local authorities in the preparation of the nomination file for the inscription of the oasis on the world heritage list. the oasis is indeed regarded as a cultural landscape, composed of an important cultural heritage, both material and immaterial. figuig is characterized by a rich architecture, particularly the ksour (fortified villages) with mud brick houses. figuig also comprises a palm grove irrigated by a complex network of canals and "foggaras" (pits). moreover, all the individual and collective practices connected to the palm grove and to the ksour constitute an important immaterial cultural heritage. the bulk of scientific data (from archaeological, geographical, historical and anthropological investigations) calls for coherent archiving in order to ensure the heritage, environmental and tourism management of the oasis. for this purpose, a gis would be useful. as a scientific and management tool, the gis is a precious device which makes it possible to produce thematic (archaeological, historical, touristic, etc.) mappings and inventories. in parallel with these scientific initiatives, the training of the various stakeholders in the practice of the gis is being developed. individuals from the municipality, the cooperation offices and the tourism sector are thus developing new competencies. in this respect, the gis should be a shared tool with multiple applications: scientific research, heritage management, urban development, tourism management, etc.
in this context, this paper sets to analyse the stakes, perspectives and applications of the gis regarding the necessary development of the oasis whilst protecting its heritage, and ensuring good governance, transparency and justification in the framework of generally binding protective measures. 1. the oasis of figuig and the urgency of protection figuig is an oasis situated in the south eastern tip of morocco, approximately 400 km south of the mediterranean sea and 7 km away from the algerian city of beni ounif. however the border between both countries is closed today, resulting in the isolation and enclosing of the city. consequently, figuig has gone through an important demographic exodus and a drastic reduction of its resources. one of the consequences of this exodus is an important degradation of the heritage, thereby making its classification as national and world heritage even more essential. at present, figuig is constituted by an urban nucleus consisting of seven « ksour » (laâbidate, lamaïz, hammam foukani, hammam tahtani, loudaghir, ouled slimane and zenaga) and of more recent districts. the ksour are distinct practically autonomous communities. every ksar possesses its own area of palm grove shared by families who exploit small plots of land surrounded with walls, “the gardens”. the oasis of figuig is a coherent, material and cultural set, based on the complementarity between the architecture and the spatial organization of the ksour, the palm grove and its irrigation system, and all the social and cultural practices, which constitute an immaterial heritage of a great importance. the society of figuig has developed a particular mudbrick architecture reflecting its organisational structures. figuig‟s material heritage is rich, both from an architectural and archaeological point of view: great walls, ramparts, watchtowers, mosques, mausoleums, irrigation channels, as well as rock drawings. in all, the oasis is a valuable natural and environmental heritage, as witnessed by its water springs and palm grove. but this heritage suffers a lot of degradation. in this context, the scientists' team supervised by professor jean-pierre vallat of the university paris diderot was contacted by figuig‟s municipality, almost 5 years ago, to assist it in its heritage protection policy, concretized by its will to inscribe the oasis on unesco‟s world heritage list (whl). the master degree " city, architecture, heritage", co-organized by paris 7 (ufr ghss) and the school of architecture paris val-de-seine (ensapvs) was involved from the outset. numerous students participated in the architectural recording works, in archaeological excavations and in the study of the unesco submission file. this diversity of activities and researches _______________________________________________________________________________________ geoinformatics ctu fce 2011 141 made a coherent filing essential to insure the reproduction of the data and to produce the necessary multidisciplinary analyses. the mapping of the ksour, in order to allow their protection and visit, and the cartographic analysis of the evolution of the palm grove made it imperative to implement a shared gis. beyond its analytical capability, the gis is a precious device making it possible to produce useful thematic maps. 
this paper thus discusses the methodology and the uses of the gis, particularly in the context of the preparation and submission of the nomination file of the figuig‟s oasis on the world heritage list, and more widely in the context of the heritage, environmental, tourist and urban development management of the oasis. figure 1: map of figuig and its ksour (a. del and n. goumeziane) figure 2: altimetric map and protection perimeter 2. studies and production of the nomination file of the oasis of figuig (morocco) for inscription on the world heritage list for the purposes of the nomination file, the french team considered at first the type of available and relevant sources and then thought about the way of assembling the information collected. in this context, the gis was considered as a tool suitable for arranging a set of diverse and scattered information in a logical way. first, we shall present the available sources and then describe the methodology applied to assemble these data and establish cartographic databases. 2.1 sources 2.1.1 historical sources the textual sources are mainly constituted by travellers‟ accounts published since the renaissance. the travellers, arabic first, then europeans, who crossed the caravan roads of the sahara, usually stopped in the oasis of figuig. most of these accounts describe an expanding oasis, constituted by a variable number of ksour throughout the times; this illustrates the permanent stakes connected with the appropriation of water and the resulting conflicts. archives relating to antiquity and the middle ages in the region are lacking. the archaeology makes it however possible to fill these gaps, and we shall deal later with the need to order and compare the data supplied by archaeological excavations to the various categories of sources presented here. among the first travellers to have described the oasis, ibn khaldoun (1332-1406)1 and léon l‟africain (14řř-1550)2 gave brief descriptions of figuig, which was then part of the zianide 1 ibn khaldūn (abū zayd ‘abdu r-rahman bin muhammad bin khaldūn al-hadrami, may 27, 1332 ad/732 ah – march 19, 1406 ad/808 ah) was an arab historiographer and historian born in north africa in present-day tunisia and is sometimes viewed as one of the forerunners of modern historiography, sociology , and economics. 2 jean-léon l’africain (el hasan ben mohammed el-wazzan ez-zayyâti) was a traveler and a fine political negotiator. in 1518, captured by the knights of the order of saint jean, he is presented as a gift to pope léon x, who has him catechized and baptized under his own names, jean léon. http://en.wikipedia.org/wiki/north_africa http://en.wikipedia.org/wiki/tunisia http://en.wikipedia.org/wiki/historiography http://en.wikipedia.org/wiki/sociology http://en.wikipedia.org/wiki/economics _______________________________________________________________________________________ geoinformatics ctu fce 2011 142 kingdom. travel stories for the xixth and xxth centuries are more numerous. among the french and german travellers, gerard rohlfs (1831-1896)3 and jakob schaudt (1865-1940)4 were the first ones to have visited figuig. after them, isabelle eberhardt (1877-1904)5, adventuress and journalis, also visited the oasis of figuig at the beginning of the xxth century. another traveller, anne levinck6, novelist and organizer of literary salons in paris, published in an article in 1885 the account of her journey to figuig, although certain historians have strong doubts in this regard (see map below). 
among the textual sources, two other overlapping categories have attracted our attention; the "geographical" sources and the "military" sources. first of all, french geographers of the beginning of the xx th century led researches on the oasis of figuig, mostly published in the annals of geography. as an example, the article of émile-félix gauthier (1864-?)7, professor at the university of algiers, reports oral information relating to the history of figuig and to the conflicts over the appropriation of water resources in the palm grove. in addition to the text itself, the article also contains a map, which we shall deal with later8. the archives of the french army and the colonial administration constitute another type of invaluable textual sources in relation with the previous ones. geographer e.-f. gauthier and geologist louis gentil thus maintained narrow links with the french army. these military archives do not only deal with strategic issues but also look at societies, their lifestyles and practices. finally, the various archives of the municipality represent a last category of textual sources. an inventory, classification and translation of these documents should be carried out in the future. after all, figuig seems to have been a source of inspiration for a lot of travellers, geographers and other intellectuals, as can be seen in the numerous studies on the society, the economy and the culture. but all these rich and scattered archives only offer partial information on the oasis the accuracy of which is hard to ascertain. these data are however precious to study the landscaped evolution of the oasis within the framework of the unesco nomination file and the management of the oasis. indeed, by comparing these sources with other types of data, we could identify the transformations, maybe even the damages, of the palm grove and suggest a rehabilitation of this valuable natural and cultural heritage. 2.1.2 iconographic sources and ancient maps the iconographic sources are mainly constituted by photos and postcards published since the beginning of the xxth century, while engravings and drawings of the previous periods are rare. most of these sources are available with the inhabitants, in the archives of the municipality of figuig and on the internet. in this respect, the dissertation of master's degree of gwenaëlle janty, made within the framework of the french researches in figuig, collected an important collection of ancient photos, which were, to the extent possible, located on the present site. as regards the ancient maps, numerous cartographic representations do exist in various scales and diverse dates. but or they raise simplistic representations without real dimension, or they are in too small scales to contain relevant information. these maps are never in a modern reference system. in order to use them, it is thus necessary to re-project them in a known reference system. for that purpose, besides the qualities of precision of the initial map and its reproductions, over which we have no control whatsoever, we need to locate, without ambiguity, places and landmarks recognizable beyond any doubt both in the ancient map and in the reference system used for the whole gis. 2.1.3 cartographic references the 1983 topographic plan is the first to be based on source data. the plan was realized by the smpt of rabat for the delegation of the housing environment and the town and country planning of the province of oujda. the plan is crossruled in “lambert nord maroc”. 
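the re-projection of scanned plans and ancient maps into a known reference system, as described in section 2.1.2, amounts to estimating a transformation from landmarks recognizable in both the map and the reference frame, and keeping the adjustment residuals as a quality indicator. a minimal least-squares sketch of the affine case; the control-point coordinates below are invented placeholders, not figuig survey data:

```python
import numpy as np

def fit_affine(src_xy, dst_xy):
    """least-squares affine transform from map-pixel coordinates to reference coordinates."""
    src = np.asarray(src_xy, dtype=float)
    dst = np.asarray(dst_xy, dtype=float)
    a = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(a, dst, rcond=None)    # 3 x 2 coefficient matrix
    residuals = dst - a @ coeffs                         # per-landmark misfit, worth archiving as metadata
    return coeffs, residuals

# hypothetical landmarks: pixel positions on a scanned sheet vs. reference coordinates in metres
pixels = [(120, 80), (900, 95), (880, 640), (130, 655)]
reference = [(565200.0, 250400.0), (566150.0, 250380.0), (566120.0, 249700.0), (565210.0, 249680.0)]
transform, res = fit_affine(pixels, reference)
print(np.round(res, 2))    # large residuals flag a poorly wedged or distorted sheet
```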
the french team also realized an aerial view in 2008 by extracting 20 google earth “shots” with a constant zoom level equivalent to the perception of an air vision from 2000 m. the projection of these two views in a coherent reference system is essential for the analysis of the evolution of the palm grove, in particular for the representation of the system of irrigation (see below). 3 friedrich gerhard rohlfs (been born on april 14th, 1831 in bremen and died on june 2nd, 1896) was a german geographer and explorer of africa. he wrote numerous works on morocco and the north of africa. 4 converted to the islam, jakob schaudt explored morocco for several years, and brought back to the minister of germany in tangier, numerous samples of ores of all kinds. 5 isabelle eberhardt (17 february 1877 – 21 october 1904) was a swiss explorer and writer who lived and travelled extensively in north africa. 6 anne levinck is suzanne lambert's literary pen name, born in lyon in 1851, died in algiers in 1898. 7 emile-félix gautier was born in clermont-ferrand on october 29th, 1864. named professor to the school of the letters of algiers, he embarks on the exploration of sahara. he pulled in 1908 a fundamental work " algerian sahara " which renews the geography of the big desert. 8 e.-f. gauthier, « la source de thaddert à figuig », annales de géographie. 1ř17, t. 26, n°144. pp. 453-466. http://en.wikipedia.org/wiki/switzerland http://en.wikipedia.org/wiki/explorer http://en.wikipedia.org/wiki/writer http://en.wikipedia.org/wiki/north_africa _______________________________________________________________________________________ geoinformatics ctu fce 2011 143 figure 3: ancient map by levinck (1885) figure 4: topographical map by gauthier (1914) 2.1.4 oral sources an important part of the sources used by historians, geographers and archaeologists are oral and bound to the memory figuig‟s inhabitants. recording this memory is very important and is currently in progress through systematic enquiries. these enquiries are registered by dictating machines and should be re-transcribed in the future. ordering the data collected and reconciling it with other sources is a necessity. 2.1.5 archaeological sources archaeological excavations led by jean-pierre vallat's team since 2005 in different sites in the oasis (ksar loudaghir and ksar ouled jaber) resulted in the gathering of a rich and varied documentation. at the level of the excavated material, the missions brought to light hebrew manuscripts, diverse fragments of ceramic, metallic objects, etc. it is more and more imperative to draw up an inventory of this material in order to study it and interpret it, which would make it possible to establish a timeline in the occupation of the palm grove since the antiquity. besides, the other data produced by the archaeologists must be stored, inventoried and combined to other sources, in particular the plans drawn up during the excavations (plans of ksar, of houses, etc.), as well as all the notes drafted during the excavations and in connection with the stratigraphic and topographic units excavated. 
2.2 constitution of the gis the name "geographical information system" covers two aspects: a set of information gathered by an assembly of coherent spatial references (which in a more explicit way means sirs: system of information with spatial reference) and one or several specific software(s) capable of managing and restoring these sets of information by referring at the same time to their two constituents: thematic and geographical. in both cases, a gis is intended to gather and to allow the analysis and the restitution of a set of information relating to the same territory. 2.2.1 unification of sources in a single geographical reference system the geographical situation of the study as well as its frame was not appropriate to make available homogeneous and good quality sources, particularly cartographic ones. the gis has the potential of allowing geographical unification and sharing of information between entities. in addition to the mapping capabilities which are the most visible ones, we have widely used the processing resources and digital organization of the information offered by the gis. three cartographic sources were available in heterogeneous reference tables: topographic plan of 1řř3 in “lambert nord maroc”, the 200ř aerial view stemming from google earth in coordinates wgsř4 and the outlines of networks on a crisscrossed map, but without an explicit reference system. the unification in the same system of these various maps took place in several stages. 14 boards of the topographic plan were scanned and assembled by curling on the basis of coordinates common to two successive boards. this assembly is at the same time geo-referenced by the identification of known landmarks to which we assign coordinates. the assembled and geo-referenced topographic plan is used as a reference map to which we associate, according to a similar procedure, aerial views and maps of networks. the residues of adjustment of the wedging algorithm are archived in metadata associated with every map, and they constitute a quality indicator of that (grosso e., 2009). _______________________________________________________________________________________ geoinformatics ctu fce 2011 144 figure 5: cartographic sources and assemblage 2.2.2 sharing of the assigned information a gis is based on the principle of systematic individual association of a set of information (the attributes) in every realization of the geographical entities. this duality database/maps makes it possible to share these attributes between all the objects of the same geographical reference system. for example, we allocate the attributes of a ksar to all the gardens whose polygons of plots of land are contained in the polygon which delineates the palm grove of the ksar. for the allocation of the attributes of a network in a garden, the procedure consists of two stages. for the analysis of the impacts of the irrigation on the gardens of the palm grove, we build, around every section of network, a polygon (corridor or buffer) to associate, by intersection, the attributes of networks with gardens. the information is mainly grouped together at the level of the plots of land–gardens which constitute the nucleus of the information system (which also corresponds to a physical reality). _______________________________________________________________________________________ geoinformatics ctu fce 2011 145 figure 6: ksour and irrigation networks (g. janty) figure 7: landscape transformations (g. janty) 3. 
results and uses of the gis 3.1 archaeological use the application of the gis in the archaeological domain turned out to be very useful in the programming of archaeological excavations as well as in the interpretation of the data which arose from it, as witnessed by the campaign of the ksar ouled jabar in 20099. the qsar of ouled jaber is on the plateau of figuig and peaks between 907,40 m and 903.10 m. at the edge of the plateau, it overhangs the palm grove and is situated between the qsours of loudaghir and laâbidate. at first, the information collected only made it possible to draw up a thematic mapping making proposals about the location of historic and geographical objects. due to the lack of data (in particular a too strong disparity of the data), we could not finish the geodatabase, which only relates at this stage to the sector of ouled jaber. also, this geodatabase was mainly designed for the purposes of heritage management and in order to give advice to figuig‟s municipality in respect of the rehabilitation of the site, at present transformed into a rubbish tip. the gis makes it therefore possible to emphasize the historic and cultural interest of the site by giving keys to interpretation of vestiges still in position. the work of collection and creation of data was carried out in three phases. the first stage of the work consisted in the collection of planimetric maps and their georeferencing (see above). georeferencing was realized from a gps points scatter and points collected from google earth. once this had been carried out, photo-interpretation of the google earth image took place. two days of prospection completed this first interpretation to characterize and locate the vestiges and appearing structures of the site. this work made it possible to bring to light a strong degradation of the state of preservation of the ruins between 1917 and 2009, underlining the urgency to draw up an inventory and to encourage the municipality to make decisions for their preservation. so, on a scientific level, the gis made it possible to identify two distinct entities, the first one corresponding to the palm grove and the other one to the residential area. a coherent plan in the western and northern part was identified and could correspond to a cob wall, formed by an enclosure, three towers and two doors. the hypothesis of a third door in the western part also seems to emerge. the confrontation with the plan of 1917 led to rectify the plan of the surrounding wall to make it pass in the ksar of laâbidate. the following campaigns led in october, 2010 and april, 2011 aimed at locating ruins and places represented on photographs and ancient maps with a view of drawing up a map of the archaeological potential of the whole oasis, on the model of what was made for the ksar of ouled jaber. for example, the map by anne levinck presented above could, in spite of its unorthodox character from a topographic point of view, contain relevant information in respect of the location of disappeared or buried structures. another application is to associate the gis with the works led for the digital restoration of the mosque of the ksar, and in the longer term, to allow a restoration of the ksar in its initial stage before abandonment and degradation. 9 r. gonzalez villaescusa et c. pichard, rapport préliminaire de fouille : première évaluation du potentiel archéologique du qsar ouled jaber (figuig), (7 octobre 20 octobre 2009), paris. 
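thematic products such as the archaeological-potential map rest on the attribute-sharing machinery of section 2.2.2: ksar attributes inherited by the garden plots contained in its palm-grove polygon, and network attributes attached to every garden intersected by the buffer corridor built around a canal section. a minimal sketch using shapely; all geometries and attribute values below are invented placeholders:

```python
from shapely.geometry import Polygon, LineString

# invented geometries: two garden plots, one irrigation-canal section, one ksar palm grove
gardens = {
    "plot_101": Polygon([(0, 0), (30, 0), (30, 20), (0, 20)]),
    "plot_102": Polygon([(60, 0), (90, 0), (90, 20), (60, 20)]),
}
canal = LineString([(0, 25), (40, 25)])                          # one network section
grove = Polygon([(-5, -5), (45, -5), (45, 40), (-5, 40)])        # palm grove of one ksar

corridor = canal.buffer(10.0)                                    # corridor (buffer) around the section
for name, plot in gardens.items():
    irrigated = corridor.intersects(plot)                        # network attribute shared by intersection
    in_grove = grove.contains(plot)                              # ksar attribute shared by containment
    print(name, "irrigated:", irrigated, "| in ksar grove:", in_grove)
```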
_______________________________________________________________________________________ geoinformatics ctu fce 2011 146 figure 8: excavation site of ouled jaber (c. pichard) 3.2 heritage management use the gis makes it possible to structure the inventory of buildings and remarkable places of the oasis. within the framework of the preparation of the nomination file for the inscription of figuig on the whl, the gis was the support for the consideration in respect of the demarcation of the perimeter of protection and the buffer zone, which took place in three stages. the first discussions with the municipality of figuig led to a first proposition regarding the perimeter. the transfer of this perimeter in the gis and the spatial parallel made between the perimeter and all the patrimonial elements stemming from preliminary studies, highlighted its inadequacy, in particular in the central part of the oasis, which is the administrative district. a second perimeter was proposed and accepted for the submission file. for the buffer zone, a digital treatment made it possible to determine a limit 200 meters away from the proposed perimeter. t his automatic treatment did not take into account of the realities of the urban expansions of the oasis and it was therefore necessary to make several changes, most of them by extension of the perimeter. but after several iterations it did not seem justified to maintain such a perimeter: little or no "alive" parts of figuig are outside the buffer zone. the idea to propose the extension of the zone to the whole oasis is finally accepted (see above, fig.2). 3.3 developmental use the gis could accompany the unavoidable development of the oasis while protecting its heritage, as shown in the study carried out to identify the most suitable sites for welcoming a solar energy firm. three sources were used: the altimetric data of the oasis, the location of the main ancient buildings and the perimeter of protection proposed in the submission file for the inscription on the whl a first altimetric data processing has made it possible to build a digital terrain model (dtm). from the dtm we establish the map of exposure to the sun. the geographical orientation is first calculated for every surface unit. surface units suitable for receiving solar panels are then selected: east, south-east, south, southwest, west (degraded of yellow on the left map). we finally calculate zones of no co-visibility where it is possible to establish constructions that are not visible from the main heritage buildings (in green on the right map). the logical combination of both maps: [{convenient zones} x not visible zones} makes it possible to determine the most suitable perimeter for installing solar panels. _______________________________________________________________________________________ geoinformatics ctu fce 2011 147 figure 9: co-visibility and suitable zones for exposure 3.4 touristic use anxious to develop a sustainable tourism, the municipality of figuig promotes a reasoned development of the activity, hoping that this one can contribute to the socio-economic and cultural development of an oasis in decline. in this context, various projects were undertaken, in particular with the assistance of mon-3, the spanish co-operation organization, and " africa 70 ", an italian ngo. the french team participated in the realization of descriptive panels, by producing texts and maps for the panels placed at the entrance of the city and the entrance of one of the 7 ksour, the ksar loudaghir. 
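two of the digital treatments described in sections 3.2 and 3.3 are simple enough to sketch: the initial buffer zone obtained as a fixed 200 m offset of the proposed perimeter, and the combination of a dtm-derived aspect map with a no-co-visibility mask to delimit zones suitable for solar panels. the sketch below assumes the perimeter polygon, the elevation grid and the visibility mask are already available; every input here is an invented placeholder, and the aspect convention assumes rows increasing northward:

```python
import numpy as np
from shapely.geometry import Polygon

# 1) buffer zone: fixed 200 m offset around the proposed protection perimeter
perimeter = Polygon([(0, 0), (1200, 0), (1200, 800), (0, 800)])   # placeholder perimeter (metres)
buffer_zone = perimeter.buffer(200.0).difference(perimeter)

# 2) suitability: sun-facing cells that are not visible from the main heritage buildings
cell = 10.0                                                        # placeholder cell size (m)
dtm = 880.0 + 40.0 * np.random.rand(200, 300)                      # placeholder elevations
dz_dy, dz_dx = np.gradient(dtm, cell)                              # uphill gradient components
aspect = (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360.0) % 360.0  # downslope azimuth, 0 = north
sun_facing = (aspect >= 90.0) & (aspect <= 270.0)                  # east, south-east, south, south-west, west
not_visible = np.random.rand(*dtm.shape) > 0.5                     # placeholder no-co-visibility mask
suitable = sun_facing & not_visible                                 # logical combination of both maps
print(round(buffer_zone.area), "m2 of buffer,", int(suitable.sum()), "suitable cells")
```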
in the future, it is also planned to prepare tourist routes, focusing on areas studied by the french researchers, in particular the archaeology and the palm grove. in this context, a gis would allow to exploit the scientific and patrimonial data to develop the figuig‟s heritage. the gis does indeed facilitate the identification and the location of the main resources and makes it also possible to prevent the potential negative impacts of the tourist activity. concretely, the work led until now led to the implementation of the tourist descriptive panels. the paris 7 -enspavs team also proposed a tourist route in the ksar loudaghir, which could be extended to the other kso ur. this project was launched in 2007 by sarah khazindar and camille vallat within the framework of their study, and finalized by nabila gouméziane in 200ř. it consists in setting up a signalling system following rather simple graphics standards, which can guide and direct the tourists and which will allow them to move inside the ksar without difficulty by signalling the pedestrian route. georeferenced backgrounds, as well as the located inventories of remarkable buildings, have been used to realize the tourist maps, panels and thematic routes in partnership with the ngo africa ‟70. 3.5 a tool of cooperation between local actors, the moroccan state and the actors of cooperation a training campaign in the practice of the gis for the various agents of the municipality and the offices of cooperation was launched in parallel with these scientific initiatives. the purpose is to put the gis at the disposal of local actors so _______________________________________________________________________________________ geoinformatics ctu fce 2011 148 that it constitutes a common tool of management of the policy of heritage protection and urban development. in the context of the unesco nomination file, the gis, by its capacity to represent the perimeter to be protected, can be a communication tool between experts, local actors and the moroccan central authorities. indeed, the municipality has to convince the ministry of culture of the importance to register the oasis on the indicative list and to support the file to be submitted to the unesco. the gis will thus constitute the “pivot” of the implementation of the management plan described in the submission file. it will make good governance possible to ensure transparency, sharing and understanding of the constraints which will inevitably accompany the protective measures of the heritage. 4. conclusion in this study, we have shown the potential of the gis in several respects. the gis makes it possible to gather varied scientific information and to produce thematic mappings. it is characterized by multiple applications. it is at the same time a scientific, heritage, environmental and tourist management tool, at the disposal of various users. within the framework of this paper, we particularly insisted on the importance of the gis in the preparation of the nomination file for the inscription of the oasis of figuig on unesco‟s world heritage list. considering the difficulty for the local actors to understand and to bring to a successful conclusion the process of inscription on the whl, the gis also turned out to be a communications tool, allowing the scientific team to collect and to represent scattered information, to explain the relevance of a classification as world heritage and to propose concrete measures. 
the debates concerning the determination of the buffer zone bear witness to the multipurpose nature of the gis. for the scientists, it allowed to collect structure and analyze a scattered documentation. it opens up future prospects researches, in particular on the archaeological level, where the gis could be the support for the management and study of the material and the information, as well as a tool suggesting digital restorations of places and monuments. this globalizing, structuring aspect of the gis comes along with an evolutionary and flexible character, as the gis can adapt itself to take account of new pieces of information. as a matter of fact, the gis testify of the plasticity of the digital information. finally, and it is one of the objectives of the french team cooperating with the municipality of figuig, the gis could be considered as a good governance tool. through the training of local actors, it is a means of communication and action, spreading information and raising awareness of the local populations. it could also help to develop top-down approaches to the management of the heritage and of the urban development of the oasis. 5. references [1] abbou, a., boilève, m.: figuig, la ville oasis du maroc oriental, paris, la croisée des chemins, 200ř. bencherifa, a., popp, h.: l’oasis de figuig : persistance et changement, rabat, publications de la faculté des lettres et des sciences humaines, série : essais et études n°3, 1řř2. [2] djenandar, e h., contribution à la lecture des écosystèmes des paysages des oasis de la haute zousfana : béni ounif (algérie occidentale), rennes, thèse de géographie, université renne 2, 2003. [3] eastman, r.: un sig en mode image: idrisi. trad c. colle, fribourg, université de fribourg, 1řř5. [4] forman, r.t.t., godron, m.: landscape ecology, new york, john wiley, 1986. [5] gautier, e. f.: la source de tzaddert à figuig, annales de géographie, t. 26, n°144 (1ř17), 453-466. [6] godron, m.: ecologie de la végétation terrestre, paris, masson, 1984. [7] gonzalez villaescusa r., pichard cl., rapport préliminaire de fouille: première évaluation du potentiel archéologique du qsar ouled jaber (figuig), (7 20 octobre 200ř), paris, université paris diderot, 200ř. [8] goumeziane n.: projet de demande d’inscription de l’oasis de figuig au patrimoine de l’humanité par l’unesco, mémoire du master 2 vap, université paris diderot, sous la direction de j-p vallat. 2010. [9] grosso, e.: proposition pour une gestion unifiée des données anciennes, sageo, vol. 7 (2009), 1-16. [10] janty g., cohen m., godron m.: la palmeraie de figuig, paysage de l’eau, patrimoine de l’humanité ?, acte colloques international lped / imep : "usages écologiques, économiques et sociaux de l'eau agricole en méditerranée: quels enjeux pour quels services?" université de provence, centre saint charles, marseille, france 2021 janvier 2011. [11] laurini, r., milleret-raffort, f.: les bases de données en géomatique, paris, editions hermès, 1řř3. [12] mizbar, s.: résistances oasiennes au maroc, aux racines du développement. recherche sur l’évolution des oasis dans la province de figuig, paris, thèse, université paris 7 denis diderot, 2004. [13] pantazis, d., donnay, j.p.: la conception de sig, paris, editions hermès, 1řř6. [14] pumain, d., saint julien, th.: l'analyse spatiale, paris, armand collin, 2010. [15] ruas a.(dir.): généralisation et représentation multiple, paris, editions hermès, 2002. 
[16] zaïd, o., figuig (maroc oriental) : l'aménagement traditionnel et les mutations de l'espace oasien, paris, thèse, université paris 1, 2 vol., 1992. ________________________________________________________________________________ geoinformatics ctu fce 2011 330 3dmadmac|spectral: hardware and software solution for integrated digitization of 3d shape, multispectral color and brdf for cultural heritage documentation robert sitnik1, grzegorz mączkowski1, jakub krzesłowski1, tomasz gadziński1 1warsaw university of technology, faculty of micromechanics and photonics boboli 8, 02-525 warsaw, poland r.sitnik@mchtr.pw.edu.pl g.maczkowski@mchtr.pw.edu.pl j.krzesłowski@mchtr.pw.edu.pl keywords: 3d shape measurement, structured light, multispectral imaging, brdf, 3d printing abstract: in this article a new 3d measurement system along with the study on 3d printing technology is presented from the perspective of quality of reproduction. in the first part of the paper the 3dmadmac|spectral system which integrates 3d shape with additional color and angular reflectance measurement capabilities is presented (see figure 1). the shape measurement system is based on structured light projection with the use of a dlp projector. the 3d shape measurement method is based on sinusoidal fringes and gray codes projection. color is being measured using multispectral images with a set of interference filters to separate spectral channels. additionally the set up includes an array of compact light sources for measuring angular reflectance based on image analysis and 3d data processing. all three components of the integrated system use the same greyscale camera as a detector. the purpose of the system is to obtain complete information about shape, color and reflectance characteristic of mea sured surface, especially for cultural heritage objects in order to create high quality 3d documentation. in the second part of the paper the 3d printing technology will be tested on real measured cultural heritage objects. tests allow to assess measurement and color accuracy of reproduction by selected 3d printing technology and shed some light on how current 3d printing technology can be applied into cultural heritage. 1. introduction preservation of cultural heritage is an important task for every modern society. recently tools which employ new technology for scanning and digitization emerge. they facilitate precise, contactless measurement of artifacts‟ features such as shape, color or reflectance distribution. there are already several solutions which address this problem, for example simon et. al. uses separate laser 3d scanner and custom made multispectral camera and develop a method of manual merging of shape and color data [1], whereas mansouri et. al. propose an integrated system which uses structure light projection for shape measurement and multispectral camera for color acquisition [2]. none of these solutions address a problem of reflectivity measurement. an approach presented in this work is based on an integrated measurement system which is capable of automatic measurement of shape using structured light projection technique, color with the aid of multispectral acquisition and angular reflectance distribution. the presented measurement system utilizes a single ccd (charge-coupled device) detector for all above kinds of measurement, which makes it insensitive to alignment of data which come from different phases of the integrated measurement. 
the measurement device along with data processing path is presented in the first part of this paper, whereas the second part is dedicated to a 3d printing technology which gives completely new opportunities in this field. it allows for creation of real copies of the previously digitized objects for promotional and educational purposes. we conclude our work with examples of cultural heritage objects from central office of measures in warsaw, poland which were scanned in order to create their digital models and then printed as a test of copying technology and the whole data processing path. 2. measurement system the measurement system used in this work consists of a developed 3d scanner based on structured light projection, multispectral camera and a specially designed device for angular reflectance measurement. all these components are ________________________________________________________________________________ geoinformatics ctu fce 2011 331 described in more details below, along with data processing procedures. additional subsection approaches a 3d printing procedure which allows copying of registered objects. 2.1 3d shape measurement the 3d shape measurement method uses the (3d) measurement with algorithms of directional merging and conversion (3dmadmac) system [3]. this method of measurement is based on a structured light technique with digital sine patterns and gray codes projection [4]. collected images allow for calculation of absolute phase which corresponds to surface coordinates in the direction perpendicular to image plane that is along the depth of the measurement volume. on the other hand transverse surface coordinates are possible to calculate, because in the previous process of geometrical calibration each pixel of the detector has assigned a ray which intersects the image plane in specific point. the single measurement results in a cloud of points which represents shape of the scanned surface, where every point maps to a single pixel of the detector. the system consists of a digital light projector (dlp) and a matrix detector (an industrial charged-coupled device (ccd) camera or a digital camera). its components along with a measurement geometrical setup are shown in figure 9. the 3dmadmac system can be customized depending on end user requirements regarding size of measurement volume, amount of measurement points and duration of a single measurement. it also consists of a set of software development kit tools which extend its functionality and automate all required measurement and data analysis algorithms. figure 9. 3d shape measurement system concept. 2.2 multispectral color measurement color of a measured surface is captured with a multi-spectral approach. the aim of this solution was to estimate reflectance spectrum of the measured surface in every point, so that its color can be calculated independently of illumination conditions and appearance under different light sources can be simulated. this approach is especially important in case of cultural heritage objects, because it gives meaningful information for art conservators regarding storage and display of artefacts. the custom built camera was built to register images in 10 spectral bands with the aid of interference filters. the filter wheel is placed between the camera matrix and the lens. it has 11 slots, because additional empty window without a filter is necessary for performing shape measurement which uses the same detector ( figure 10). 
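the fringe analysis summarized in section 2.1 can be illustrated with the standard n-step phase-shifting formula; the exact number of shifts used by 3dmadmac is not stated here, so a four-step variant is assumed, and the gray-code images are reduced to an integer fringe order that unwraps the phase. all image arrays below are synthetic placeholders:

```python
import numpy as np

def wrapped_phase(frames):
    """wrapped phase from n equally phase-shifted sinusoidal fringe images."""
    n = len(frames)
    k = np.arange(n).reshape(-1, 1, 1)
    num = (frames * np.sin(2 * np.pi * k / n)).sum(axis=0)
    den = (frames * np.cos(2 * np.pi * k / n)).sum(axis=0)
    return -np.arctan2(num, den)                       # in (-pi, pi]

def absolute_phase(frames, fringe_order):
    """unwrap the phase with the period index decoded from the gray-code sequence."""
    return wrapped_phase(frames) + 2 * np.pi * fringe_order

# synthetic scene: 4 phase-shifted frames of a linear phase ramp and its decoded fringe order
h, w, n = 480, 640, 4
true_phase = np.tile(np.linspace(0, 40 * np.pi, w), (h, 1))
frames = np.stack([128 + 100 * np.cos(true_phase + 2 * np.pi * k / n) for k in range(n)])
order = np.floor((true_phase + np.pi) / (2 * np.pi))    # what the gray codes would provide
print(np.allclose(absolute_phase(frames, order), true_phase))   # True
```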
the lens mount is located outside the case which allows for simple lens replacement according to required measurement conditions. the multispectral capture system uses analytic calibration procedure described in previous work [5]. it is based on capturing images of white reference plate for light source spectrum compensation and images of uniform background to compensate for illumination distribution and spectral filters' angular characteristic. additionally transverse shifts of images due to positioning errors can be eliminated. ________________________________________________________________________________ geoinformatics ctu fce 2011 332 figure 10. the concept of multispectral color measurement system and directional illuminators for brdf measurement. 2.3 angular reflectance measurement another device is employed to measure angular reflectance distribution of surface. the result is a bi-directional reflectance distribution function (brdf) modelled with phong parameters [6]. the device comprises of a set of light sources distributed on a grid pattern and illuminating measurement volume. each illuminator estimates lambertian source and allows directional illumination of the investigated surface ( figure 10). pictures are captured with all illuminators turned on sequentially which gives a collection of reflectance values in the function of illumination angle and serves as a brdf estimation which leads to phong parameters calculation [7]. 3. data processing raw data coming from the measurement are simply black-and-white images and a set of operations is needed to extract meaningful data from them. for clearer description they can be divided in a similar way as measurement stages because the data coming from them are for the most of the processing path independent and they are combined into concise digital model at the end. 3.1 3d shape calculation the result of the shape measurement is a cloud of points which is a set of vertices distributed in a cartesian space which resemble the location of surface of the measured object. in order to calculate it from measurement images a calibration data is necessary. it consists of straight lines‟ equations associated with every pixel of the detector, calculated in real units and a phase distribution interpolated over whole measurement volume, calculated from phase shifted sine fringes which are projected during the measurement procedure [chyba! záložka není definována.]. a cloud of points is a result of measurement from a single direction, so in order to scan the whole surface of arbitrary shape the measurement procedure should be repeated for multiple directions. when clouds of points are calculated the mask which can be applied to measurement images is known. it indicates which pixels from these images belong to the scanned surface. in other data processing procedures only these points are being calculated, which means that only the data associated with points in the cloud are further processed. in the next step the clouds of points from several directions are merged in order to estimate the whole measured surface. because the object is placed in the measurement volume manually the individual clouds have different local coordinate systems and they need to be fitted together. automatic algorithms in 3dmadmac system are used to transform the clouds into a single, global coordinate system. they use two steps. in the first one a coarse fitting is achieved based on local curvature or texture distribution of the overlapping parts of the clouds. 
the second stage uses icp algorithms for precise fit [8]. once clouds are fit they are filtered and smoothed to eliminate noise and ill-calculated coordinates. afterwards they are simplified to reduce the amount of data and finally a triangulation procedure [9] is applied to create a mesh which is possible to export in vrml format and display in standard 3d imaging and manipulation software. ________________________________________________________________________________ geoinformatics ctu fce 2011 333 3.2 color calculation images acquired during measurement correspond to different narrow spectral ranges and they estimate the spectral response of the measured surface combined with the characteristic of the light source. proposed calibration with the use of white reference plate and uniform background helps to extract reflectance spectrum [chyba! záložka není definována.] in 10 intervals corresponding to interference filters used in the experiment. after that data are extrapolated over whole visible spectrum by the means of spline fitting between measurement points. the spectral reflectance calculated this way for every point in the cloud is used for color estimation in xyz coordinates according to cie rules [10], with previously chosen illuminant d65. further processing employs color calculation in srgb color space. finally overlapping data from different directions are used to equalize lightness variations which exist because of non uniform illumination distribution during measurement. after above procedure it is possible to create texture for independently obtained triangle mesh. first the surface of the object is divided into separate areas with small variations in normal vectors directions and averaged normals are treated as independent texture mapping directions. this allows dividing color data into patches which are later mapped to different parts of the triangle mesh. this approach allows for precise texture mapping regardless of the measured shape. 3.3 brdf calculation the set of images acquired with illumination from different directions allows estimation of bi-directional reflectance distribution function in every point of the surface. the fast algorithm [chyba! záložka není definována.] which calculates coefficients of the phong model is used [chyba! záložka není definována.]. it assumes that the brdf is symmetrical and does not depend on light wavelength, but nevertheless it gives interesting results for surfaces made from different materials, as expected. three phong parameters for diffuse, specular and shininess coefficients are stored as an additional texture which uses the same mapping as the color texture described previously and can be used as a material information in 3d rendering software. data obtained this way are used for displaying in virtual environment only and does not take part in copying procedure which exploits shape and color. 4.3d printing there are several 3d printing methods currently available on the market. among many properties they differ in material used to create models, precision and speed. an interesting solution is 3d printing (3dp) technology which is the only one which offers possibility to print in color. this gives a good opportunity for copying models of cultural heritage objects, because color reproduction is an important aspect of their appearance. therefore this method was applied and evaluated in the presented research. 3d printing is the fastest known method of producing 3d models in the rapid prototyping technology [11]. 
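before moving on to the printing process, the per-point color calculation of section 3.2 — a reflectance spectrum integrated against the cie colour matching functions under illuminant d65 and converted to srgb — can be sketched as below. this is a hedged illustration under simplifying assumptions, not the authors' code; the colour matching functions, the illuminant spectrum and the xyz-to-srgb matrix are expected as precomputed inputs on a common wavelength grid, and the spline interpolation of the ten filter bands over the visible range is assumed to have been done already.

```python
# a hedged sketch of the per-point color calculation: reflectance spectrum -> cie xyz -> srgb.
# cmfs, illuminant and the xyz-to-linear-srgb matrix are assumed to be supplied by the caller.
import numpy as np

def spectrum_to_srgb(reflectance, illuminant, cmfs, m_xyz_to_linear_srgb):
    """reflectance, illuminant: (n,) sampled on the same wavelength grid;
       cmfs: (n, 3) cie x-bar, y-bar, z-bar; m_xyz_to_linear_srgb: (3, 3) standard matrix."""
    stimulus = reflectance * illuminant               # light reflected towards the detector
    xyz = stimulus @ cmfs                             # integrate against the colour matching functions
    xyz /= illuminant @ cmfs[:, 1]                    # normalise so a perfect white gives y = 1
    rgb_lin = m_xyz_to_linear_srgb @ xyz              # linear srgb
    rgb = np.where(rgb_lin <= 0.0031308,              # srgb gamma encoding
                   12.92 * rgb_lin,
                   1.055 * np.clip(rgb_lin, 0, None) ** (1 / 2.4) - 0.055)
    return np.clip(rgb, 0.0, 1.0)
```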
the 3dp method is incremental (additive) and the printing process is carried out in several stages. initially, the computer model is loaded by an operator into the print management software and subdivided into hundreds of horizontal layers. the printer then spreads a thin layer of powder in the working chamber with a roller, and the printing head deposits a layer of color adhesive (glue) on the powder surface. the places where the cartridge leaves the adhesive correspond to the contours of the model in the given cross-section (layer). the procedure is shown in figure 11. after all layers of powder and glue have been applied, the printed model needs to be cleaned of non-glued plaster and hardened. color on the surface is obtained by separate heads spreading glue (resin) in different base colors: cyan, magenta, yellow, black and colorless, which are mixed to create the desired color in the same way as in traditional ink-jet printing technology.

4.1 shape reproduction
in this study a zcorporation zprinter 650 machine was used. according to the manufacturer, it can print models in 390,000 colors with a resolution of 600×540 dpi. the size of the working chamber is 254 mm × 381 mm × 203 mm, a single layer thickness lies between 0.089 and 0.102 mm, and the printed wall thickness may be as small as 0.1 mm. many factors resulting from the applied technology lead to a spread in the dimensions of created models. these include movement of the shaft scattering successive layers of plaster; positioning and movement of the printing head and platform in the print chamber; spreading of the glue; gravitational compaction of powder layers (the squash phenomenon); as well as temperature and humidity. the conducted evaluation established several guidelines for more precise printing which make full use of the printer's capabilities. among the most important ones: the longest dimension of the model should be placed in the horizontal plane, because reproduction in vertical planes is weaker. on the other hand, long elements that extend beyond the main block of the model should be placed parallel to the acting force of gravity, which may contradict the previous rule; long items that cannot be placed as suggested should be printed with so-called supports. this shows that individual decisions should be made according to the specific printed shape. the greatest distortions caused by the squash phenomenon arise in the middle of the workspace. one way of minimizing this effect is to put the most massive elements of the model at the bottom, so that the loose layer of plaster between the platform and the model is as small as possible. other recommendations are that humidity and temperature in the room where the printer is placed should be appropriate (according to the manufacturer's recommendations) and kept stable. additionally, a visible deterioration in surface reproduction occurs due to wear of the printing heads: worn heads reproduce the flatness of the surface less faithfully, the models are rough, and one can see the places where each layer was deposited. in the application described below these suggestions were taken into consideration, with good results. figure 11. procedure of 3d printing.

4.2 color reproduction
a separate aspect of printing concerns color reproduction.
the manufacturer does not provide any automatic profiling tool for translating color into the limited color space of the 3d printer. the only guideline is a set of 729 color samples uniformly distributed in rgb color space, prepared for printing as a reference map of the colors the printer can reproduce. the user is supposed to compare the color patches and manually choose the one that best resembles the color to be reproduced. obviously this approach is impossible to apply when printing an object with a complex texture where every point may have a different color. therefore a profiling method for the printer was proposed. it uses the mentioned color patches, which were measured with a spectrophotometer to establish their cie lab coordinates. the acquired data constitute a 3d look-up table (lut) which provides the transformation between rgb values sent to the printer and the device-independent color space. the problem to solve is to transform arbitrary lab coordinates, not necessarily existing in the lut, into the printer color space (red ball in figure 12). the proposed solution first finds the nearest lab coordinates which belong to the lut and its immediate neighbours (blue balls). all of the found coordinate sets have representations in the printer's rgb color space, which are used to calculate the unknown rgb coordinates as their weighted average. the weights are established as the inverse of the cartesian distances between the required out-of-lut lab coordinates and its nearest neighbours in lab color space (a small sketch of this interpolation is given below). figure 12. construction of the 3d printer color profile. blue balls represent adjacent points in the look-up table between lab and rgb color spaces, whereas the red ball stands for an out-of-lut point whose rgb coordinates are to be established. the performed analysis shows that the suggested gamut mapping compresses the input color gamut, which is usually larger, to fit into the printer gamut by shifting out-of-gamut colors to the boundaries of the smaller gamut. additionally, the conducted tests show that the final color appearance is strongly influenced by the post-printing treatment, which includes hardening the surface with resin and drying the model. acceptable repeatability is very hard to achieve because the process is carried out by hand and would require a very skilful operator. this shows that differences in the achieved color can be considerable, and even the application of a color profile may not produce good results in the face of significant technological inaccuracy.

5. digitization results
courtesy of the central office of measures in warsaw, three test objects were investigated in order to create their digital models and copies with the aid of 3d printing technology. the original objects are shown in figure 13, followed by table 1 where their overall characteristics are provided. they were made from different materials and have different surface finishes. the wooden volume reference cup is lacquered, the occasional medal is polished and the iron weight reference has a worn, rusty surface. figure 13. measured objects: a) wooden cup serving as an old volume reference; b) occasional medal; c) old weight reference. figure 14 shows virtual models of the digitized objects, rendered in custom-made software. the program allows for displaying color texture and reflectance properties, as well as changing the illumination and the object's location with real-time refreshing of its appearance.
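the profiling idea of section 4.2 can be written down compactly: the printed chart of 729 patches gives a look-up table from printer rgb to measured cie lab, and an arbitrary lab target is mapped back to rgb as an inverse-distance-weighted average of nearby lut entries. the sketch below is an illustration under assumptions, not the authors' implementation; choosing the k nearest entries approximates the paper's "nearest point and its immediate neighbours".

```python
# a minimal sketch of the proposed printer profiling: inverse-distance-weighted interpolation
# in the lab -> rgb look-up table built from the 729 printed colour patches.
import numpy as np

def lab_to_printer_rgb(lab_target, lut_lab, lut_rgb, k=8, eps=1e-9):
    """lut_lab: (729, 3) measured lab of the patches; lut_rgb: (729, 3) rgb values sent to the printer."""
    d = np.linalg.norm(lut_lab - lab_target, axis=1)           # euclidean distance in lab space
    idx = np.argsort(d)[:k]                                    # nearest lut entries (blue balls in figure 12)
    w = 1.0 / (d[idx] + eps)                                   # inverse-distance weights
    return (w[:, None] * lut_rgb[idx]).sum(axis=0) / w.sum()   # weighted-average printer rgb
```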
property                          volume reference    occasional medal    weight reference
material                          wood                steel               iron
dimensions [mm]                   80×60×60            70×70×5             73×80×40
measurement directions            28                  6                   21
total points                      0.8 mln             1.55 mln            4.78 mln
total triangles                   90000               35000               60000
texture patches                   16                  4                   16
texture resolution per patch      1024×1024           2048×2048           1024×1024
table 1. measured objects properties.

figure 14. models of measured objects: a) volume reference cup; b) occasional medal; c) weight reference. the last presented step concerns copying the measured objects. the results of this procedure are shown in figure 15. although the shape is preserved well in all cases, the color reproduction is poor because of the limited gamut of the 3d printer and the complicated handmade post-printing treatment of the copies. these results show that improvements can be made in the color processing procedures as well as in the printing technology. figure 15. printed copies of the measured objects: a) volume reference cup; b) occasional medal; c) weight reference.

6. conclusions
the integrated measurement system presented in this work facilitates digitization of cultural heritage objects. it performs automatic measurement of 3d shape, multispectral color and surface reflectivity with the aid of a single detector, therefore no manual data alignment is necessary. it can be applied to create precise digital models for the purpose of displaying or storing information about precious artefacts. moreover, it utilizes a 3d printing technology to build copies of the measured objects for promotion or education. as a result of the conducted research, sample measurement outcomes are provided to show the possibilities of the presented technology. although shape and color measurement techniques are already well developed, there are still issues concerning unbiased merging of data from different directions, the precision and repeatability of the color measurement procedure, and the printing technology, which allows object reproduction with limited precision only. particularly the 3d printer's color mapping is questionable. this gives a potential for further research, but even now it is a very promising technology.

7. acknowledgements
this work was performed under grant no. pl0097 financed by the norwegian financial mechanism and the eea financial mechanism (2004-2009).

8. references
[1] simon c., huxhagen u., mansouri a., heritage a., boochs f., marzani f.: integration of high resolution spatial and spectral data acquisition systems to provide complementary datasets for cultural heritage applications, proceedings of spie, 7531(1), 2010, 75310l.
[2] mansouri a., lathuiliere a., marzani f., voisin y., gouton p.: toward a 3d multispectral scanner: an application to multimedia, proceedings of ieee multimedia, 14(1), 2007, 40-47.
[3] sitnik r., kujawinska m., woznicki j.: digital fringe projection system for large-volume 360-deg shape measurement, opt. eng., 41, 2002, 443-449.
[4] osten w., nadeborn p., andrae p.: general hierarchical approach in absolute phase measurement, proceedings of spie, 2860, 1996, 2-13.
[5] mączkowski g., sitnik r., krzesłowski j.: integrated method for 3d shape and multispectral color measurement, j. imaging sci. technol., 55(3), 2011, 030502-(10).
[6] phong b.t.: illumination for computer generated pictures, communications of the acm, 18, 1975, 311-317.
[7] krzesłowski j., sitnik r.,
maczkowski g.: integrated three-dimensional shape and reflection properties measurement system, appl. opt., 50, 2011, 532-541. [8] besl p. mckay n.:, a method for registration of 3-d shapes, ieee transactions on pattern analysis and machine intelligence, 14, 1992, 239-256. [9] sitnik r. karaszewski m.: optimized point cloud triangulation for 3d scanning systems, machine graphics & vision 17, 2008, 349-371. [10] wyszecki g. stiles w. s.: color science: concepts and methods, quantitative data and formule, new york, john wiley&sons, 2000. [11] gibson i. rosen d. stucker b.: additive manufacturing technologies. rapid prototyping to direct digital manufacturing, new york, springer, 2010. usability testing of web mapping portals petr voldán department of mapping and cartography, petr.voldan@fsv.cvut.cz keywords: usability, web-design, map sites interface, user experience abstract this study presents a usability testing as method, which can be used to improve controlling of web map sites. study refers to the basic principles of this method and describes particular usability tests of mapping sites. in this paper are identified potential usability problems of web sites: amapy.cz, google maps and mapy.cz. the usability testing was focused on problems related with user interfaces, addresses searching and route planning of the map sites. introduction many people today use map portals for finding addresses, travel planning or finding points of interest (poi). their main purpose is not only the provision of maps, but also providing the various mapping tools or services. unfortunately a lot of people have problems to control map or to use the offered services. the reason can be the diversity of people attending the map page [6]. web portals also do not always offer the services that users want or there is time pressure as a factor that may lead to problems (situation when traffic or weather is rapidly changing) [10]. these all factors have a negative effect for each side. user can get information that looks differently and generally it could lead to dissatisfaction [8];[9]. this situation is not suitable for site owner. the web is the ultimate customer-empowering environment. it’s the user how decides everything, because it is so easy to go elsewhere. usability testing can help developers create high-quality (usability) user interface. quality interface provides users ”easy-to-use” system and users achieve their goals more effectively [7]. the purpose of this study is to present usability testing, and show the main problems of some czech mapping portals. usability testing people, who do not know anything about usability testing, often ask: what is the aim of testing? here are some benefits: � make sure that the product is easily understandable for users. geinformatics fce ctu 2010 57 voldán p.: usability testing of web mapping portals � people will be effective and satisfied with your product. � usability testing helps eliminate design problems. � ”testing one user is 100 better then testing none.” [3] tested portals three web map sited were evaluated in the study: � amapy.cz abbreviated in this paper as am, available at http://www.amapy.cz. amapy was the first czech professional web map portal [1]. � google maps abbreviated in this paper as gm, available at http://maps.google.com [2] � mapy.cz abbreviated in this paper as sm, available at http://www.mapy.cz. mapy.cz is one of the greatest czech map sites (240,000 entries per day) [4]. 
these sites offer standard an interactive 2d map with zooming and panning functions. additionally, there are features as finding location, addresses, businesses, route planning and many more. task scenario it was created task scenario same for all tested site which focused on the following areas: map controlling, searching places in an unknown location, route planning and on maps themselves. task scenario was a coherent story of a weekend visit. the aim is to create familiar situations with realistic reason for performing the task. the closer, that the scenario represents reality, the more reliable the test results [8]. the story itself had 4 parts and assumes that user live in or near prague. the text of the scenario as it was presented to participants: 1. your friend paul invited you to a weekend visit to his new apartment. in addition to invitations you were asked to pick up a carton of wine in paul favorite wine shop. � users got address of new apartment and name of wine shop without address. � this section was focused on finding unknown company and route planning 2. plan a walk trip to the pond ”velké dářko” � velké dářko is a pond placed 8 km north of paul’s apartment � the aim was to plan trip with using another maps (touristic, terrain, ortophoto) 3. determine whether there is any pool in paul’s city and note the route. 4. plan your route home including a visit of castle lipnice nad sázavou. � lipnice nad sázavou is castle 30 km from paul’s apartment � this section focused on route planning over transit point and finding poi. procedure geinformatics fce ctu 2010 58 http://www.amapy.cz http://maps.google.com http://www.mapy.cz voldán p.: usability testing of web mapping portals all tests were performed in normal office either a desktop pc or laptop. when author selected respondents, he tried to find wide range of users – in terms of age, gender or education (the words participant or user in the following text mean test users). testing all mapping sites participated in sixteen users (eight in testing of sm site and eight in testing of am and gm sites). about half of the participants were university students, both technical and nontechnical subjects and they were from twenty to thirty years old. the second part of participant was consisted of people older than thirty years until retirement age. these included a office-girl, a mechanical engineer, nurse or schoolmistress. it should be emphasized that none of the participants had cartographic or geodetic background. figure 1. scheme of the test room the operating system was windows xp, user could choose the web browser and users have used a mouse. task scenario was presented to each participant before start of the test. moderator assured of each participant that the user is not tested, but mapping sites. participants were asked to think aloud during the test and then the moderator read the tasks (think aloud is a technique where users are asked to speak about their thoughts, feelings). each session was about sixty minutes. after the execution of all tasks, moderator or observer discussed with the participant about test conduct, problems or feelings about the test. at the end of the session there was a brief discussion between the observer and moderator about the main problems. usability problems the list of positive or negative situations was obtained by analysis of observer’s notes or citations of the users. 
geinformatics fce ctu 2010 59 voldán p.: usability testing of web mapping portals controlling tested portal allows controlling of map windows (zooming, panning) using either graphics elements or using mouse gestures. in generally it is possible to say that one half of users used the full mouse gestures. the second group of users used a combination of a graphical interface and mouse gestures or only the graphical interface. method use [%] drag & drop 67 gui – crosshair 33 table 1a. control method – map pan method use [%] double click left mouse button 42 gui – slider 33 scroll wheel 25 table 1b. control method – zoom in method use [%] double click right mouse button 17 gui – slider 33 scroll wheel 50 table 1c. control method – zoom out am and sm allow zoom-in into rectangular region using combination ctrl key + left mouse button. this feature was not used during testing (gm also allows zoom-in into rectangular area, but still it is a laboratory function). searching a large number of queries in full-text search are addresses or locations. company location and even geographic places people are not searching on mapping sites but on the classic search engines (google, seznam.cz). there are map elements that the users know for example from the classic maps (restaurants, railway stations, bus stops, pools). these elements users usually search visually on the map (sm). big problem are two search fields on am site (figure 2.). users do not know which field to use and its leads to user confusion. in the case of am there is a problem enhanced by the fact that the main search box (the top one) is visually separated from the map window, and users can easily overlook it. jacob nielsen, guru on the field user experience, says the system should always keep users informed about what is going on [5]. unfortunately this rule is broken on am. there is big time delay before the system notifies the state ”searching”. this delay causes that users think that there is no search results and tray search again. search engine also has to handle different formats of addresses. am returned no results where the user entered an address in format for system nonstandard (street, postal code, city). geinformatics fce ctu 2010 60 voldán p.: usability testing of web mapping portals figure 2. the user doesn’t know what field have to use for searching (am) autocomplete feature has been positively evaluated by users (gm). ”it offers me . . . nice”, user citation about address autocompleting. next good feature for users were additional information in the search results (gm) – see figure 3. these images quickly help to decide about the relevance of search results. figure 3. additional information in the search results (gm) appropriate issue for further research is the level of zoom after searching addresses, cities and etc. from the tests generally follows that users behave differently in a situation where they search the points of interests in known location or in unknown location. in known location users zoom in after showing result. conversely in unknown location users zoom out after showing result. in both cases, the reason is checking of results of user’s search. geinformatics fce ctu 2010 61 voldán p.: usability testing of web mapping portals direction, route planning according to task scenario the participants carried out route planning during the test. the evaluation shows that when the users are planning the route, they are interesting in the following facts: 1. 
the users want to information about the part of the route, they don’t know. user citation: ”i catch the highway” 2. users are interested in the names of major cities on the route, which city they will turn and name of the next town on the route. important are the numbers of roads and numbers of highway exits. 3. detailing listing of streets and intersections is relevant to the users when they reach destination city/poi. from these points implies that the current itinerary of all portals are too detailed, users lose character information and in real situations are difficult to use itinerary for navigation. most of the users do not use the route planner, where the starting or destination point should be any poi. route planner is primarily used for getting direction between a particular city and address. in spite of the previous fact, mapping portals should allow to enter in search field as start or destination poi. this fact did not meet at am site. in the case of gm, there is no way how to end the route planner. after searching the direction, searched route remains displayed and the user can not cancel it. user citation: ”i don’t know how to cancel it”. on the other hand, there was no problem to add new transit point into the route for users (gm). surprisingly, participants also used the route planner in a small (for them unknown) town to short directions (around one kilometer distance). map types all mapping sites offer different types of maps – base map, orthophoto map, tourist map, etc. users generally prefer default (base) type map and they switch to other map type in a few cases. even in the situation, if the map switching could lead to faster execution theirs aims. generally, base maps of sm and am were positive evaluated by participants. the reason is probably a better cartographic language of sm and am sites. the added value of the sm and am are also the tourist maps – including hiking and biking paths (gm has not the tourist map – only the terrain map). one point of the task scenario directly encouraged to use tourist maps (walk trip to the pond velké dářko). anyhow for some users it was easier to work only with a basic map, because it is more synoptic. user citation: ”the touristic map has a lot of lines”. those users, who used the tourist map, were confused in some situations: tourist maps include map symbols which are not interactive – it is not possible click on them with aim to get more information (sm, am). users know that it is possible click on the map symbol on the basic map and thus they want also to click on the map symbol on the touristic map, but there are the map symbols only parts of the static images (figure 4.). user citation: ”i expect when i am placing the mouse cursor over a map symbol, it tells me something about symbol.” there was another issue with am site. am do not zoom-out to the level where can be seen all state. geinformatics fce ctu 2010 62 voldán p.: usability testing of web mapping portals figure 4. clickable and not clickable symbol look same the system should correspond to relationships in the real world [5]. on the sm site was disabled by default the tourist paths (figure 5.), but classical (printed) tourist map has highlighted tourist paths. this situation has led to unnecessary confusion for the user (this time is this feature enabled). figure 5. match between system and the real world other expierences this section describes other situations which were not included in any of the preceding chapters. 
positively assessed are the info windows that appear after clicking on the marker of the poi. these info windows serve as a good source of additional information. am does not allow the browser’s function ”back” – that was one of the major shortcomings of the am site. next issue is improper placement of application windows which can cause major problems. am shows settings panel so unsuitable that panel overlaps another buttons. figure 6 shown two states of am site. first one shows normal state and the second one the state where are the control buttons hidden under another window. user citation: ”also, it does not have a biking path”. user wanted to show the biking path, but the button for turning on of the biking path was hidden by another window. geinformatics fce ctu 2010 63 voldán p.: usability testing of web mapping portals figure 6. settings panel overlaps gui buttons conclusion in this study was presented the results of usability testing three map sites. for testing all sites was developed same task scenario. these tasks tried to respect the user’s real-life situations (finding addresses, route planning). this paper shows that through usability testing, you can find a critical point or, conversely, quality points, and work with these parts in further development. although some findings of fact may seem like small things, each such problem can lead to loss of users and may thus have a financial impact. author hope that described problems could serve as a guide in development of other web map services. identified problem also enables us to make a brief comparison of test sites – in terms of usability. table 2 shows summary of positives or negatives aspects of evaluated map sites. map site negative positive google maps 7 4 mapy.cz 10 2 amapy 16 2 table 2. summary of positives or negative reaction according to the number of problems have the best ratings sm and gm sites. the main advantages of the gm site are autocomplete function or images tips in results. on the contrary, the weakness of the gm is informational value of basic map. it is the clearness of base maps and lots of pois that have been positively evaluated in the case of sm site. the worst-rated is am site, which contains a lot of critical comments. the improving of usability of am site doesn’t have to be great changes. it should be a gradual evolution. in first step remove most significant problems (two search fields, function back). then conduct a new usability testing and repeat all process. paper also offers possible directions for further research, such as: level of zoom of the results after searching or simplified of itinerary of route planner. author thanks the company seznam.cz for the aid in testing. geinformatics fce ctu 2010 64 voldán p.: usability testing of web mapping portals references 1. amapy (2010), online at: http://www.amapy.cz 2. google maps (2010), online at: http://maps.google.com 3. krug, s., don’t make me think: a common sense approach to web usability, 2nd edition, new riders press, isbn 978-0321344755 4. mapy.cz (2010), online at: http://www.mapy.cz 5. nielsen, j., ten usability heuristics (2010), online at: http://www.useit.com/papers/heuristic/heuristic list.html 6. nivala, a.-m., brewster, s. and l.t. sarjakoski (2008) usability evaluation of web mapping sites, the cartographic journal vol. 45 no. 2 7. nivala, a.-m., brewster, s. and l.t. sarjakoski (2007). usability problems of web map sites. paper presented at the international cartographic conference, moscow 8. 
rubin, j., chisnell d., handbook of usability testing: howto plan, design, and conduct effective tests, 2nd edition, wiley, isbn 978-0470185483 9. wachowicz, m., vullings, w., bulens, j., groot, h. d., & broek, m. v. d. (2005). uncovering the main elements of geo-web usability. paper presented at the agile 2005 – 8th conference on geographic information science, lisboa, portugal. 10. wilkening, j. (2010). maps users’ preferences and performance under time pressure. giscience 2010: sixth international conference on geographic information science. zurich. geinformatics fce ctu 2010 65 http://www.amapy.cz http://maps.google.com http://www.mapy.cz http://www.useit.com/papers/heuristic/heuristic_list.html object based and pixel based classification using rapideye satellite imagery of eti-osa, lagos, nigeria object based and pixel based classification using rapideye satellite imagery of eti-osa, lagos, nigeria e. o. makindea, a. t. salamib, j. b. olaleyea, o. c. okewusia adepartment of surveying and geoinformatics, faculty of engineering, university of lagos, akoka, lagos, nigeria b space applications and environmental science laboratory, institute of ecology and environmental studies, faculty of science, obafemi awolowo university, ile-ife, osun, nigeria estherdanisi@gmail.com, ayobasalami@yahoo.com, jbolaleye@yahoo.com, pelcool55@yahoo.com abstract several studies have been carried out to find an appropriate method to classify the remote sensing data. traditional classification approaches are all pixel-based, and do not utilize the spatial information within an object which is an important source of information to image classification. thus, this study compared the pixelbased and object-based classification algorithms using rapideye satellite image of eti-osa lga, lagos. in the object-oriented approach, the image was segmented to homogenous areas by suitable parameters such as a scale parameter, compactness, shape etc. classification based on segments was done by a nearest neighbour classifier. in the pixel-based classification, the spectral angle mapper was used to classify the images. the user accuracy for each class using object-based classification were 98.31% for water body, 92.31% for vegetation, 86.67% for bare soil and 90.57% for built up areas while the user accuracy for the pixel-based classification were 98.28% for water body, 84.06% for vegetation 86.36% and 79.41% for built up areas. these classification techniques were subjected to accuracy assessment and the overall accuracy of the object-based classification was 94.47%, while that of pixel-based classification yielded 86.64%. the results of classification and its accuracy assessment show that the object-based approach gave more accurate and satisfying results. keywords: rapideye satellite image; pixel-based classification; object-based classification. introduction according to the findings of [2], geospatial specialists have theorized the possibility of developing a fully automated classification procedure that would be an improvement over pixel-based procedures. pixel-based procedures analyse the spectral properties of every pixel within an area of interest, without taking into account the spatial or contextual information related to geoinformatics fce ctu 15(2), 2016, doi:10.14311/gi.15.2.5 59 https://doi.org/10.14311/gi.15.2.5 http://creativecommons.org/licenses/by/4.0/ e. o. makinde et al.: object based and pixel based classification the pixel of interest. 
since higher resolution satellite imagery is available, it could be used to produce very accurate classifications [13]. researchers have generally observed that when pixel-based methods are applied to high-resolution satellite images a “salt and pepper” effect was produced that contributed to the inaccuracy of the classification [4]. thus object-based classification seems to produce better results when applied to higher resolutions. there exist computer software packages such as ecognition and feature analyst that have been developed to utilize object-based classification procedures. these packages analyse both the spectral and spatial/contextual properties of pixels and use a segmentation process and iterative learning algorithms to achieve a semi-automatic classification procedure that promises to be more accurate than traditional pixel-based methods [3]. the concept of object-based image analysis as an alternative to pixel-based analysis was introduced in 1970s [11]. the initial practical application was towards automation of linear feature extraction. in addition to the limitation from hardware, software, poor resolution of images and interpretation theories, the early application of object-based image analysis faced obstacles in information fusing, classification validation, reasonable efficiency attaining, and analysis automation [9]. since the mid-1990s, hardware capability has increased dramatically and high spatial resolution images [9] with increased spectral variability became available. pixel-based image classification encountered serious problems in dealing with high spatial resolution images and thus the demand for object-based image analysis has increased [11]. object-based image analysis works on objects instead of single pixels. the idea to classify objects stems from the fact that most image data exhibit characteristic texture features which are neglected in conventional classifications. in the early development stage of object-based image analysis, objects were extracted from pre-defined boundaries, and the following classifications based on those extracted objects exhibited results with higher accuracy, comparing with those by pixel-based methods [7]. this technique classifying objects extracted from predefined boundaries is applicable for agriculture plots or other land cover classes with clear boundaries, while it is not suitable to the areas with no boundaries readily available, such as semi-natural areas. image segmentation is the solution for obtaining objects in areas without pre-defined boundaries. it is a preliminary step in object-based image analysis. since image classification results are essential for decision making, the methods employed in deriving this results needs to be investigated. in nigeria, the image classification technique being used is the pixel-based. object-based method of image classification has not been explored in nigeria before now. probably because of its cost which makes it difficult for an individual and sometimes even for a cooperate entities to purchase the necessary data and software tools. however, within this study high resolution satellite imagery (rapideye, 5 m resolution) was acquired. the object-based and the pixel-based classification were performed and they results compared. material and methods the study area eti-osa is a local government area (lga) in the lagos division of lagos state, nigeria located within 6°26′n, 6°28′n and 3°26′e, 3°32′e. 
eti-osa lga maintains its eastern boundary with ibeju-lekki lga and its western boundary with lagos island lga where the eti-osa geoinformatics fce ctu 15(2), 2016 60 e. o. makinde et al.: object based and pixel based classification lga was created from and was known then as the lagos city council. it also has its northern boundary with the lagoon and its southern boundary with the atlantic ocean. eti-osa lga has a population of 283,791, which represents 3.11% of the state’s population. 158,858 of the total population are male while the remaining 124,933 are female. image data rapideye satellite imagery data acquired in 2009 covering part of eti-osa lga, lagos state, nigeria was procured. the sensor type used in acquiring this imagery is the multi-spectral push broom imager and is captures five spectral bands. these are: blue (440 – 510nm), green (520 – 590nm), red (600 – 700nm), red-edge (690-730nm) and near-infrared bands (760 – 850nm). it also has a panchromatic band of 1m. the ground sampling distance at nadir is 6.5 m and the orthorectified pixel size is 5 m with a swath width of 77 km [6]. ground co-ordinates of points within the study area were obtained using handheld gps receiver, and were used to both facilitate classification and carry out accuracy assessment. data processing erdas imagine 2014 software was used in the pre-processing, pixel-based classification, and post processing of the rapideye satellite imagery covering the study area. for the pixel-based classification, the satellite imagery was classified by pixel-based spectral angle mapper (sam) classifier. the signature file was generated and this involves the training of classes. aoi (areas of interest) was created and used to train the land cover classes (waterbody, bare-soil, vegetation and built-up) for every class, random samples were taken across the study area based on pixel spectra. the sam algorithm which is a supervised approach was then applied. the spectral angle mapper (sam) algorithm is based on the assumption that a single pixel of remote sensing images represents one certain ground cover material, which can be uniquely assigned to only one ground cover class. this algorithm is based on the measurement of the spectral similarity between two spectra. the spectral similarity can be obtained by considering each spectrum as a vector in q -dimensional space, where q is the number of bands [15, 16]. the ecognition developer was used for the object-based classification of the rapideye satellite imagery. the extracted individual bands of the rapideye scene acquired were stacked together into a single multispectral image using erdas imagine. arcgis 10.1 was used to extract the shapefile of the study area from the digitized administrative map of lagos state and to produce the land cover map of the study area. the boundary shape file (.shp) of eti-osa lga was converted to an area of interest file (.aoi) which was used in sub-setting or clipping the stacked multispectral rapideye imageries. for the object-based image classification, the image was divided into objects serving as building blocks for further analysis using the multi resolution segmentation algorithm in ecognition software [1]. the segmentation was performed to group contiguous pixels into areas or segments that are homogenous and the following criteria were used: scale: 450 shape: 0.3 and compactness: 0.5. a pair of neighbouring image objects was merged into one large object. 
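stepping back for a moment to the pixel-based branch, the spectral angle mapper rule described above — every pixel spectrum and every class reference spectrum treated as a vector in q-dimensional band space, and the pixel assigned to the class making the smallest angle — can be sketched as follows. this is an illustrative sketch, not the erdas imagine implementation; the rejection threshold and array names are assumptions.

```python
# an illustrative spectral angle mapper (sam) classifier: assign each pixel to the class whose
# reference spectrum makes the smallest angle with the pixel spectrum in band space.
import numpy as np

def sam_classify(image, references, max_angle=0.2):
    """image: (h, w, q) band values; references: (c, q) one training spectrum per class."""
    pix = image.reshape(-1, image.shape[-1]).astype(float)             # (h*w, q)
    num = pix @ references.T                                           # dot products with each class
    den = (np.linalg.norm(pix, axis=1, keepdims=True)
           * np.linalg.norm(references, axis=1) + 1e-12)
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))                  # (h*w, c) spectral angles [rad]
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1                        # leave very dissimilar pixels unclassified
    return labels.reshape(image.shape[:2])
```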
the decision to merge a pair of neighbouring image objects is made with local homogeneity attributes and can be defined by equation (1) [18]:

f = \sum_{i=1}^{n} w_i \left( n_{merge}\,\sigma_{merge} - \left( n_{obj1}\,\sigma_{obj1} + n_{obj2}\,\sigma_{obj2} \right) \right) \qquad (1)

where n is the number of bands and w_i is the weight of the current band; n_{merge}, n_{obj1} and n_{obj2} are respectively the numbers of pixels within the merged object, initial object 1 and initial object 2; and \sigma_{merge}, \sigma_{obj1}, \sigma_{obj2} are the variances of the merged object, initial object 1 and initial object 2. f is thus the local tone heterogeneity weighted by the size of the image objects and summed over the n image bands. once the image had been segmented, it was classified at the segment level, which is termed object-based classification. the criteria and attributes mentioned above were used to label the objects and were used further in the object-based nearest neighbour (nn) classification, a supervised technique that classifies all objects in the entire image based on the selected samples and the defined statistics.
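equation (1) above transcribes almost directly into code. the sketch below is a hedged illustration, not the ecognition implementation; following the common multiresolution-segmentation formulation, σ is computed here as the per-band standard deviation of an object's pixel values (the text calls σ the variance — swap in .var() if that reading is intended), and the band weights default to one.

```python
# a hedged transcription of the local homogeneity change in equation (1): the increase in
# size-weighted deviation when two candidate objects are merged, summed over the image bands.
import numpy as np

def merge_heterogeneity(obj1, obj2, weights=None):
    """obj1, obj2: lists with one 1-d array of pixel values per band for each candidate object."""
    weights = weights or [1.0] * len(obj1)
    f = 0.0
    for w, b1, b2 in zip(weights, obj1, obj2):
        merged = np.concatenate([b1, b2])
        f += w * (merged.size * merged.std()
                  - (b1.size * b1.std() + b2.size * b2.std()))
    return f   # small values mean the merge keeps the new object homogeneous
```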
accuracy assessment
the results of the pixel-based and the object-based classification of the rapideye image were compared and their accuracy was assessed using 250 randomly generated reference points for the image. the reference data were derived from the panchromatic band of the rapideye image for the study area. error matrices were then generated and the assessment indices were derived, including the producer's accuracy, the user's accuracy and the kappa statistics. to determine whether the two classifications were significantly different at α = 0.05, a kappa analysis and a pair-wise z-test were computed [5, 19]:

\hat{K} = \frac{p_0 - p_c}{1 - p_c} \qquad (2)

Z = \frac{|\hat{K}_1 - \hat{K}_2|}{\sqrt{\mathrm{var}(\hat{K}_1) + \mathrm{var}(\hat{K}_2)}} \qquad (3)

where p_0 represents the actual agreement, which is simply the proportion of instances classified correctly throughout the entire error matrix, and p_c represents the "chance agreement", the accuracy the classifier would be expected to achieve by chance given the error matrix; p_c is directly related to the number of instances of each class, along with the number of instances on which the classifier agreed with the ground truth class. \hat{K}_1 and \hat{K}_2 represent the kappa coefficients of the two classifications, respectively. the kappa coefficient is a measure of the agreement between observed and predicted values and of whether that agreement is by chance [19].

results and analysis
a. rapideye colour composite imageries
figure 1 shows the clipped rapideye imagery of eti-osa lga using the standard "true colour" composite – bands 3, 2 and 1. because the visible bands are used in this combination, ground features appear in colours similar to their appearance to the human visual system: healthy vegetation is green, roads are grey, and shorelines are white. figure 1: composite of 2009 rapideye satellite imagery.

b. land cover maps
the land cover maps of the study area produced for the two classification types are shown in figures 2 and 3. water bodies within the study area are depicted in blue, vegetation cover in green, built-up areas in red and bare soil in grey within eti-osa, lagos.

c. accuracy assessment
the diagonal elements of the error matrix indicate the correctly classified pixels, while the off-diagonal elements indicate the wrongly classified pixels, based on the comparison of the panchromatic band of the image, data derived from the field and the classified image. table 1 gives the meaning of the codes used in the subsequent tables.

table 1: codes used in the accuracy report tables
code   meaning
wb     water body
vg     vegetation
bs     bare soil
bu     built-up

figure 2: land cover map of the study area (pixel-based classification)
figure 3: land cover map of the study area (object-based classification)

d. accuracy report
the results of the pixel-based and the object-oriented classification of the rapideye image were compared by accuracy assessment. a total of 217 samples were selected randomly for the assessment. "known" pixels from ground truthing were identified on the panchromatic band, which was used as the reference data. error matrices were then generated and the assessment indices are given in tables 2-3, including the producer's accuracy, the user's accuracy and the kappa statistics. an accuracy assessment was also performed on the object-based classification results. the best classification result shows statistics of the training samples; these statistics allow one to compare which classes have been classified best. the result showed that water bodies had the highest accuracy for both the object-based and the pixel-based classification.

table 2: error matrix and accuracy report (pixel-based classification)
                    reference data
classified data     wb    vg    bs    bu    total    producer accuracy    user accuracy
wb                  57     0     1     0       58               100%             98.28%
vg                   0    58     0    11       69             89.23%             84.06%
bs                   0     0    19     3       22             70.37%             86.36%
bu                   0     7     7    54       68             79.41%             79.41%
total               57    65    27    68      217
overall classification accuracy = 86.64%

table 3: error matrix and accuracy report (object-based classification)
                    reference data
classified data     wb    vg    bs    bu    total    producer accuracy    user accuracy
wb                  58     1     0     0       59               100%             98.31%
vg                   1    60     3     1       65             95.24%             92.31%
bs                   0     0    39     1       40             97.50%             86.67%
bu                   0     2     3    48       53             96.00%             90.57%
total               59    63    45    50      217
overall classification accuracy = 94.47%

kappa is a discrete multivariate technique that tests whether one data set is significantly different from another. it is used to test whether two error matrices are significantly different [5]. the two error matrices can be from different classifications, as might be the case when conducting change detection, or kappa may be used on only one error matrix by comparing that matrix to a hypothetical completely random error matrix. in other words, kappa's associated test statistic khat tests how a classification performed relative to a hypothetical, completely randomly determined classification. an important property of kappa is that it uses the information contained in all of the cells of the error matrix, rather than only the diagonal elements, to estimate the accuracy of the classification [10]. the khat statistic ranges from 0 to 1; a khat value of 0.75 means that the classification accounts for 75% more of the variation in the data than would a hypothetical completely random classification.
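the figures in tables 2-5 and the pair-wise test of equations (2) and (3) follow mechanically from the error matrices. the short sketch below reproduces, for example, the 98.28% user's accuracy for water, the 86.64% overall accuracy and an overall kappa of about 0.815 for the pixel-based matrix; it is an illustration only, and the kappa variance is approximated with the common large-sample formula k(1-k)/n rather than the exact delta-method expression, so the resulting z value is indicative.

```python
# a hedged sketch of the accuracy assessment: user's/producer's/overall accuracy, kappa per
# equation (2) and the pair-wise z-score of equation (3), computed from an error matrix
# (rows = classified data, columns = reference data).
import numpy as np

def accuracy_report(matrix):
    m = np.asarray(matrix, dtype=float)
    diag, n = np.diag(m), m.sum()
    users = diag / m.sum(axis=1)                        # correct / row total
    producers = diag / m.sum(axis=0)                    # correct / column total
    p0 = diag.sum() / n                                 # observed (overall) agreement
    pc = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (p0 - pc) / (1 - pc)
    var = kappa * (1 - kappa) / n                       # approximate variance of kappa
    return users, producers, p0, kappa, var

def z_score(k1, v1, k2, v2):
    return abs(k1 - k2) / np.sqrt(v1 + v2)              # compare with 1.96 at alpha = 0.05

pixel_based = [[57, 0, 1, 0],                           # table 2 rows: wb, vg, bs, bu
               [0, 58, 0, 11],
               [0, 0, 19, 3],
               [0, 7, 7, 54]]
print(accuracy_report(pixel_based))                     # overall ~0.8664, kappa ~0.815
```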
a general framework for interpreting khat values was introduced by [10, 14]. they recommended that khat values greater than 0.8 represent strong agreement, values between 0.4 and 0.8 represent moderate agreement, and values below 0.4 represent poor agreement [10]. the tables below show the kappa statistics for the two classification methods employed.

table 4: kappa statistics (pixel-based classification)
class name    kappa
waterbody     0.9766
vegetation    0.7724
bare soil     0.8443
built-up      0.8443
overall kappa statistics = 0.8153

table 5: kappa statistics (object-based classification)
class name    kappa
waterbody     0.9842
vegetation    0.8200
bare soil     0.8962
built-up      0.9235
overall kappa statistics = 0.8674

comparison of pixel-based and object-based classification
from the tables it can be seen that the object-oriented classification produced more accurate results; the overall accuracy is 7.83% higher than that of the pixel-based classification. moreover, because the pixel-based classification uses only the spectral information of individual pixels in the image data, its results look like a pepper-and-salt picture.

i. representation of land cover classes by pixels
table 6 is a matrix of the number of pixels classified per land cover class for each of the two methods. in this table, the numbers of pixels assigned to the same class by the two classifiers are compared.

table 6: matrix of classified pixels
                    number of pixels
land cover class    pixel-based    object-based
bare soil                  7007            1290
built-up                  24984            6260
vegetation                25534            3910
water body                17936            1380

ii. similarities and differences based on the classified pixels
table 6 shows that the comparison between pixel-based and object-based classification is possible and that the results of the two classifiers follow a general trend. however, the results from the object-based classification show a rather low number of classified pixels; this is because, in the object-based classification, pixels have been grouped into objects in the process of segmentation. table 6 indicates that the dominant land cover within the study area is vegetation, followed by the built-up land cover class. these results are clearly displayed by both the object-based and the pixel-based classification; however, in all cases the pixel-based classification identified more pixels than the object-based classification.

discussion
pixel-based and object-based image classification methods have their own advantages and disadvantages depending upon their area of application and, most importantly, the remote sensing datasets that are used for information extraction [9]. traditional pixel-based classification makes use of the combined spectral responses of all training pixels for a given class; hence the resulting signature comprises responses from a group of different land covers in the training samples, and the pixel-based classification system thus ignores the effect of mixed pixels [12]. the object-based classification, in contrast, uses the nearest neighbour (nn) classification technique, because intelligent image objects obtained with multi-resolution segmentation are used in combination with supervised classification. the pixel-based classification approach has many disadvantages when compared to object-based classification, especially in high resolution satellite data processing.
though proved to be highly successful with low to moderate spatial resolution data, pixel-based classification produces quite a lot unsatisfactory classification accuracy results with high resolution images. the use of spatial information from neighborhood or adjacent pixels remains a critical drawback to pixel-based image classification. object-based classification approach covers the drawbacks of pixel-based classification approach and results in outstanding classification accuracies (7.83% higher overall accuracy than the pixel-based approach in our test). this is consistent with other studies that have shown object-based methods performs better than the pixel-based methods when applied to high resolution satellite images [8, 17]. the object-based approach provided a significantly higher user’s accuracy in the built up land cover category with an increase of 11.16%. this was largely due to the better differentiation between the built up class and vegetation class using the object-based approach [13]. the bare soil land cover class yielded similar accuracies using both the pixel-based and object-based approaches, demonstrating that both types of classification methods may be beneficial to land managers and researchers interested in studying them. object-based classification can use not only spectral information of land types, but also use pixels’ spatial position, shape characteristics, texture parameters and the relationship between contexts, which effectively avoid the “salt & pepper phenomenon” and greatly improve the accuracy of classification. after undertaking adequate literature survey, it can be observed that for high resolution satellite image classification, object-based classification approach is considered the most suitable approach by most of the researchers as compared to pixel-based classification. the traditional pixel-based classification cannot make the best use of the relationship between pixel geoinformatics fce ctu 15(2), 2016 67 e. o. makinde et al.: object based and pixel based classification and pixels around it, which makes the classification results, become incoherent. in almost all the case studies, object-based classification approach resulted in greater accuracy ranging from 84% to 89% (approximately). conclusion in this research, pixel-based and object-based image classification was performed on rapideye satellite imagery with a 6.5m spatial resolution. the image was classified by pixel-based spectral angle mapper classifier, and object-based nearest neighbour classifier, respectively. accuracy assessment results showed that object-based image classification obtained higher accuracy than pixel-based classification. this study showed that the object-based image classification has advantage over the pixel-based classification for high spatial resolution images. the object-based method is recommended as an image classification method for high resolution images given its superiority in terms of appearance and statistical accuracy as compared to the pixel-based method. this report has only investigated the object-based method of image classification on rapideye satellite image; it has not made an assessment on other high resolution images. it is therefore recommended that other high resolution imageries should be assessed. acknowledgements we appreciate prof. ayobami t. salami, the head of space applications and environmental science laboratory (spael), obafemi awolowo university, ile-ife, osun state, nigeria for the imagery. 
references [1] martin baatz and arno schäpe. “object-oriented and multi-scale image analysis in semantic networks”. in: 2nd international symposium: operationalization of remote sensing. vol. 16. 20. 1999, pp. 7–13. [2] thomas blaschke et al. “object-oriented image processing in an integrated gis/remote sensing environment and perspectives for environmental applications”. in: environmental information for planning, politics and the public 2 (2000), pp. 555–570. [3] j. s. blundell and d. w. opitz. “object recognition and feature extraction from imagery: the feature analyst® approach”. in: international archives of photogrammetry, remote sensing and spatial information sciences 36.4 (2006), p. c42. [4] manuel lameiras campagnolo and j. orestes cerdeira. “contextual classification of remotely sensed images with integer linear programming”. in: compimage. 2006, pp. 123–128. [5] russell g. congalton and kass green. assessing the accuracy of remotely sensed data: principles and practices. boca raton, florida: crc press, 1999, p. 137. [6] satellite imaging corporation. rapideye satellite sensor. 2015. url: http : / / www . satimagingcorp.com/satellite-sensors/other-satellite-sensors/rapideye/. [7] a. m dean and g. m. smith. “an evaluation of per-parcel land cover mapping using maximum likelihood class probabilities”. in: international journal of remote sensing 24.14 (2003), pp. 2905–2920. doi: 10.1080/01431160210155910. geoinformatics fce ctu 15(2), 2016 68 http://www.satimagingcorp.com/satellite-sensors/other-satellite-sensors/rapideye/ http://www.satimagingcorp.com/satellite-sensors/other-satellite-sensors/rapideye/ https://doi.org/10.1080/01431160210155910 e. o. makinde et al.: object based and pixel based classification [8] donald i. m. enderle and robert c. weih jr. “integrating supervised and unsupervised classification methods to develop a more accurate land cover classification”. in: journal of the arkansas academy of science 59 (2005), pp. 65–73. [9] david flanders, mryka hall-beyer, and joan pereverzoff. “preliminary evaluation of ecognition object-based software for cut block delineation and feature extraction”. in: canadian journal of remote sensing 29.4 (2003), pp. 441–452. doi: 10.5589/m03-006. [10] giles m. foody. “status of land cover classification accuracy assessment”. in: remote sensing of environment 80.1 (2002), pp. 185–201. [11] yan gao and jean francois mas. “a comparison of the performance of pixel-based and object-based classifications over images with various spatial resolutions”. in: online journal of earth sciences 2.1 (2008), pp. 27–35. url: http://docsdrive.com/pdfs/ medwelljournals/ojesci/2008/27-35.pdf. [12] yan gao and jean francois mas. “a comparison of the performance of pixel-based and object-based classifications over images with various spatial resolutions”. in: international archives of photogrammetry, remote sensing and spatial information sciences xxxviii-4/c1 (2008). [13] steven m. de jong, tom hornstra, and hans-gerd maas. “an integrated spatial and spectral approach to the classification of mediterranean land cover types: the ssc method”. in: international journal of applied earth observation and geoinformation 3.2 (2001), pp. 176–183. [14] j. richard landis and gary g. koch. “the measurement of observer agreement for categorical data”. in: biometrics (1977), pp. 159–174. [15] xiaofang liu and chun yang. “a kernel spectral angle mapper algorithm for remote sensing image classification”. 
in: image and signal processing (cisp), 2013 6th international congress on. vol. 2. ieee. 2013, pp. 814–818. [16] s. rashmi, s. addamani, and s. ravikiran. “spectral angle mapper algorithm for remote sensing image classification”. in: ijiset–international journal of innovative science, engineering & technology 50.4 (2014), pp. 201–205. [17] n. d. riggan jr. and r. c. weih jr. “a comparison of pixel-based versus object-based land use/land cover classification methodologies”. in: journal of the arkansas academy of science 63 (2009), pp. 145–152. [18] l. wang, w. p. sousa, and p. gong. “integration of object-based and pixel-based classification for mapping mangroves with ikonos imagery”. in: international journal of remote sensing 25.24 (2004), pp. 5655–5668. [19] jerrold b. zar. biostatistical analysis. 4th. upper saddle river, new jersey: prentice hall, 2007, p. 663. geoinformatics fce ctu 15(2), 2016 69 https://doi.org/10.5589/m03-006 http://docsdrive.com/pdfs/medwelljournals/ojesci/2008/27-35.pdf http://docsdrive.com/pdfs/medwelljournals/ojesci/2008/27-35.pdf geoinformatics fce ctu 15(2), 2016 70 e. o. makinde et al.: object based and pixel based classification _______________________________________________________________________________________ geoinformatics ctu fce 2011 178 recommendations and strategies for the establishment of a guideline for monument documentation harmonized with the existing european standards and codes anastasia kioussi1, kyriakos labropoulos1, maria karoglou1, antonia moropoulou1, roko zarnic2 1national technical university of athens, school of chemical engineering 9 iroon polytechniou str. gr15870, zografou campus, athens, greece nasiak@central.ntua.gr 2university of ljublijana, faculty of civil and geodetic engineering jamova cesta 2, si-1000 ljubljana, slovenia keywords: cultural heritage, monument documentation, documentation, risk assessment indicators abstract: information on current state of immovable cultural heritage is important for specifying measures necessary to preserve the heritage in an appropriate condition and ensure that the maintenance required to keep it at this level is well defined. in this framework, eu-chic project aims to set-up a system introducing a concept of the “cultural heritage identity card”, which will develop into a systematic collection and storage of data on immovable heritage objects across european and neighboring countries. this work supports sustainable maintenance, preservation and revitalization of historic sites and monuments. this is achieved through the development of a guideline for the assessment of efficient documentation systems that identify the parameters needed for the characterisation of the preservation state of a monument and its possible alterations during its entire lifetime. in order to develop and test the recommendations for efficient compilation of the data pertinent to each monument under observation, the development of criteria, indicators and protocols as part of a common methodology that encourages the exchange of document between european countries is initiated. the criteria encompass all potential factors affecting the building structure, the non-structural elements, the architectural value and any other aspects ranging from the functionality of the monument/building, to its historic value. 
this has been achieved through an integrated survey of existing documentation protocols in the field of cultural her itage protection, and through implementation of recommendations about criteria for harmonizing these protocols, both which provide a new documentation methodology. this new methodology is an upgrade of current documentation methodologies, and responds to criteria and indicators for risk assessment and the technology state of diagnostics and data management. a guideline will provide the essential document for further development of european policies for the traceability of cultural assets and harmonization of criteria for the future maintenance of european cultural heritage. 1. introduction cultural heritage protection is a multidisciplinary field that relies heavily on data compilation and processing. in order to support the continual process of sustainable maintenance, preservation and revitalization of historic sites and monuments, there exists a pressing need to collect and record reliable data on european cultural heritage. however, this is often difficult to achieve. in this framework, the eu-chic project has been initiated and aims to set-up a system introducing the concept of the “cultural heritage identity card”, which will develop into a systematic collection and storage of data on immovable heritage objects across european and neighbouring countries. recommendations for the assessment and application of efficient systems are developed that identify the parameters needed for the characterisation of the preservation state of a monument and its possible alterations during its entire lifetime. such a concept is expected to have a significant cost benefit for cultural heritage owners and managers by using common parameters, and will increase the level of professional know-how in order to minimize the detrimental impact of lack of knowledge and expertise. based on previous work and experience in the field by the national technical university of athens [1, 2], recommendations and strategies for monuments‟ documentation, harmonized with the existing european standards and codes are developed. in order to develop and test the recommendations needed for the efficient compilation of the data pertinent to each monument under observation, the first part of the work focuses on the _______________________________________________________________________________________ geoinformatics ctu fce 2011 179 development of criteria, indicators and protocols as part of a common methodology that encourages the exchange of document between european countries. the criteria encompass all potential factors affecting the building structure, the non-structural elements, the architectural value and any other aspects ranging from the functionality of the monument/building, to its historic value. this is achieved through an integrated survey of existing documentation protocols in the field of cultural heritage protection, and through implementation of recommendations about criteria for harmonizing these protocols, both which provide a new documentation methodology. this new methodology is an upgrade of current documentation methodologies, and responds to criteria and indicators for risk assessment and the technology state of diagnostics and data management. 2. state of the art 2.1 survey of existing documentation protocols there exists no established universal documentation protocol/system for cultural heritage. 
this has also been identified by cost c5 action which concluded that there are great variations in the systems of establishing and evaluating data from buildings in the european countries. in general, the responsibility for collecting data depends on the administrative structure in each country. as part of the development of european-level integrated documentation protocols, a survey of existing documentation protocols was performed to assess the current state-of-the-art in this field. the eu-chic project partners [3] studied twenty-three information systems from eleven european countries (belgium, czech republic, germany, greece, italy, luxemburg, malta, poland, portugal, slovenia, spain) and israel. the following table presents a brief description of the systems analysed. geographic location of usage documentation protocol belgium vioe – vlaams instituut voor het onroerend erfgoed [4] database of cultural heritage in the brussels region [5] database of the cultural heritage in the walloon region [6] monumentenwacht vlaanderen [7] cities of bruges and antwerp: inspection of the buildings owned by the heritage and others – aiming at the maintenance of these buildings ministry of education of the flemish government: methodology for the inspection and evaluation of the condition and the maintenance of school buildings czech republic integrovaný informační systém památkové péče (iispp) [8 germany adabweb – allgemeine denkmaldatenbank [9] greece national archive of monuments information system (polemon) [10] ministry of culture / directorate of byzantine and postbyzantine monuments: archimed risk map of cultural heritage and mapping and description of cultural landscape ministry of culture: technical reports for museum interventions, extensions, upgrades or new buildings acropolis restoration service (ysma) [11] israel site card italy sistema informativo generale per il catalogo (sigec) [12] carta del rischio [13] luxemburg inventory of the cultural heritage in the grand-duchy of luxemburg (buildings and landscapes) [14] malta compilation of data inventory cards national protective inventory [15] poland karta cmentarza 1. krajowa ewidencja zabytków, 2. krajowy rejestr zabytków portugal igespar pt slovenia cultural heritage register [16] spain ficha de patrimonio etnológico en castilla y leon inventario de patrimonio industrial de la provincia de valladolid table 1: surveyed documentation protocols _______________________________________________________________________________________ geoinformatics ctu fce 2011 180 a detailed study of these protocols has been done to collect a meaningful sample of the existing information on cultural heritage protocols. each one of these protocols follows unique procedures. however, these methodologies have been compared in order to draw conclusions concerning the best way to develop a hypothetically optimal procedure. this study focused on issues relating to the preservation and sustainability of cultural heritage, such as location of the building, monitoring processes, management, current state of preservation, materials and intervention techniques applied in the past, all aiming to document the complete history of the monument. the systems described in table 1 are a compilation of two types of protocols: documentation systems and risk assessment systems. some protocols may belong to both types. 
2.2 other information systems data about cultural heritage are being collected, managed and presented by many different bodies with varying purpose, range of coverage and level of details. international perspective is important in research and development, coordination of activities and standardisation. the international view and safeguarding approach is interesting in comparison to the current administrative approach of the national information systems and databases. some of the well recognised approaches are the core data index to historic buildings and monuments of the architectural heritage [17], the unesco‟s world heritage list [1ř], the european heritage network [1ř], the council of europe [20], icomos [21], the getty conservation institute [22] and recordim [23]. 2.3 european policies, standards and directives furthermore any recommendation and strategies developed for the establishment of a monument documentation guideline should conform to existing european policies, standards and directives. tables 2 and 3 summarize respectively the main ec policies and directives, among those applicable to cultural heritage protection and management, with an impact on documentation. in addition to the above policies and directives, consideration should be given to the standard cen/tc 346. european committee for standardization (cen) is a major provider of european standards and technical specifications [24]. it is the only recognized european organization according to directive 98/34/ec for the planning, drafting and adoption of european standards in almost all areas of economic activity. cen's national members work together to develop voluntary european standards (ens). the standard cen/tc 346 has been recently submitted to the cen enquiry and concerns “conservation of cultural property – condition survey of immovable heritage”. as such it can also be considered as a guideline for a common and standardized procedure to describe the condition state of built heritage. the technical committee cen/tc 346 aims to develop a european standard that gives guidelines for a condition survey of an immovable cultural heritage object. it states how an immovable cultural heritage object should be registered, examined, documented and reported on. 3. recommendations and strategies the survey on existing information systems at national, european and international level, as well as the detailed review of the state of the art on methodology and directives currently employed and applied in the field of documentation, both help to identify the most common and effective methods and tools for collection of data related to monument documentation. in addition, the critical issues for developing recommendations and strategies for directives, relating to cultural heritage protection, management and decision making are identified allowing the necessary adjustments to specific needs. the results are presented in table 4. in order for any new guideline for monument documentation to be effective and widely applicable, not only it needs to be harmonized with existing european standards and codes but also, most importantly, be able to cater for the variety and the particularity of cultural heritage. this is achieved by selecting and integrating common criteria that formulate a dynamic archive, incorporating and supplying with information on the monument, during its entire life-time. 
the vital stage is the inclusion of all existing data concerning special building documentation, materials and building's structure, environmental factors, degradation mechanisms, diagnosis techniques and methods and intervention works as described above. a detailed description of the categories and subcategories presented in table 1 will be included in the final report of the eu-chic project [3] pending feedback from the network of researchers (experts from all over europe dealing with documentation protocols used for cultural heritage protection) and the advisory network (representatives of national authorities established in european countries, dealing with cultural heritage protection). it should be noted that documentation criteria and risk indicators identified within the existing protocols are focusing mainly on the macroscale of the monuments. in fact, they should not be limited to simply record information and risks associated with environmental dangers, human impact and natural hazards affecting the static/ structural state of the monument, but should include other factors such as the conservation state of the materials (i.e. not only the static/structural aspects of the building), the importance and distribution of cultural heritage, the impact factor of the hazards present, various socioeconomic parameters etc. obviously, these factors cover different scales of the problem. in particular, there is a correlation between decay and damage of materials that often leads to the monuments pathology. since the materials‟ state of conservation depends on their physicochemical and physicomechanical parameters and the materials‟ behavior in a corrosive environment is not _______________________________________________________________________________________ geoinformatics ctu fce 2011 181 generalized, the risk assessment should be dealt in the direction of revealing the specific active decay mechanism with an integrated decay study both in mesoscale [type of decay (morphology)] and microscale [kinetics of the phenomenon (decay rate) and thermodynamics of the phenomenon (susceptibility to decay)] level, through a standardized diagnostic study methodology [37, 38]. ec policies remarks cultural heritage article 151 – ec treaty (maastricht (1992)): contribute to the flowering of the cultures of the member states, while respecting their national and regional diversity and at the same time bringing the common ch to the fore. 
preservation and enhancement of ch [25] the importance of future refurbishment, rehabilitation and maintenance of the built heritage with regards to local and regional development article 17 of the convention for protection of cultural heritage in europe: investment in research in cultural heritage [26] reinforcement of research at european level and provision of appropriated inputs to establish effective & compatible restoration and conservation rules, by establishing a system to catalogue the cultural heritage assets and their elements, and to establish authenticity and historic tracks of the european cultural heritage london declaration for improving ch research: an initiative to protect and safeguard european cultural heritage through scientific and technological research [27] working paper – stoa unit: technological requirements for solutions in the conservation and protection of historical monuments and archaeological remains [28] council resolution, oj c 32, 2002, 21/01/2002: the role of culture in the development of the european union [29] ch as a valuable asset for socioeconomic development of europe council resolution, oj c 136, 2003, 26/05/2003: the horizontal aspects of culture: increasing synergies with other sectors and exchange good practices in social and economic dimensions [30] multidisciplinary approach to consider all the aspects related to cultural heritage, including socio-economic aspects. it is linked to the construction field through the european construction technology platform (ectp) sustainability and environmental policies water framework directive : civil protection to be taken in relation to cultural heritage [31] proper identity card system to protect valuable assets. ec cafe initiative (clean air for europe): the effects of air pollution on cultural heritage. protect and improve the built environment and cultural heritage, and promote biodiversity. [32] promoting integrity of building envelop and surrounding environment. treaty of amsterdam (1997) & brussels european council (2003): the development that meets the needs of the present without compromising the ability of future generations to meet their own needs [33] the implementation of good practices to promote sustainable conservation of cultural heritage and proper adaptation to new needs. eia directive (97/11/ec) amending (85/337/eec): assessment of the environmental effects of those public and private projects which are likely to have significant effects on the environment [34] the interaction of cultural heritage with the environment, including natural landscapes. environmental technologies action plan: stimulating technologies for sustainable development [35] introduction of high measuring technologies applicable to cultural heritage tourism article 3(1)u of the treaty of maastricht: measures in the sphere of tourism [36] development of sustainable and compatible exploitation of cultural heritage, with tourism being a major issue. table 2: ec policies considered for the development of harmonization criteria for the guideline for monument documentation _______________________________________________________________________________________ geoinformatics ctu fce 2011 182 directive remarks construction products 89/106/eec requires standardisation of construction products. this is a threat to some traditional building materials and traditional conservation methods. energy efficiency 93/76/eec requires application of ventilation in old buildings. 
general indoor climate requirements are hard to fulfill for old buildings without also affecting the cultural value. environmental impact assessment 85/337/eec assessing certain public and private projects on the environment. controversial when related to mixed areas of cultural and natural heritage. lifts 95/16/eec concerning lifts permanently in service. requirements for accessibility of disabled persons can be a problem fulfilling in protected buildings without also affecting authenticity and cultural value. natural habitats 92/43/eec aiming to protect biodiversity. one consequence is that intrusive vegetation disturbing ch values in a habitat protected by the directive cannot be removed. ch values in these areas must succumb to the conflicting nature interests. proposal for directive on geographic information in the eu (inspire) com (2004)516 wishes to establish a unified system for geographic information in europe, for monitoring and safeguarding of nature areas and pollutions control. ch objects and buildings not included, and consequently will not be included in the planning tools emerging from this unified gis system. art 87-89. ( eu treaty rome 1957). ). eea treaty art. 61. rules on state aid rules on state aid interfere with transfer of cultural heritage properties to nonprofit organizations / foundations and state funding of ch in general. table 3: selective directives considered for the development of harmonization criteria for the guideline for monument documentation main category subcategories general description formal / touristic name, national-international code number, current usage, context and landscape, dating geographic information location, historic buildings and monuments ( individual / complex item), linear structures, protected areas, archaeological sites and monuments ( individual / complex item) ownership & legal status ownership status, legal protection status, relevant legislation historical documentation historical resources research, archaeological survey, dating methods, construction history, conservation history architectural documentation architectural type, building elements, materials, building techniques, decoration, electromechanical elements, movable objects, surveying and documentation detailed scale plans, realistic 2d depictions, realistic 3d depictions, visual observations materials condition and structural health assessment maintenance inspections, diagnostic surveys, phenomena & mechanisms of decay, building areas & sampling, non destructive & analytical techniques testing interventions construction phases, conservation history, restoration interventions, repair materials & techniques outer effects impact long term environmental effects, environmental change, anthropic impact and improper use, disasters (floods / landslides / wind, storms and hurricanes / earthquakes and tsunamis / fire / others / avalanches / volcanoes), dangers (coastal dynamics), environmental factors (air / humidity / geological impact / surroundings / flora & fauna / erosion index / blackening index), anthropic (dynamics of demographic density / tourism / liability to theft) _______________________________________________________________________________________ geoinformatics ctu fce 2011 183 main category subcategories vulnerability and risk management preventive care, mitigations, monitoring, expert decision making system (inspection / diagnosis / intervention indices) management, exploitation & maintenance planning preservation plan, development & exploitation plan, 
accessibility assessment, schedule of maintenance inspections, integrated management through gis scientific research methods and tools r&d, thematic research and databases table 4: main categories and subcategories of recording data for a documentation guideline furthermore, an effective documentation protocol should be able to respond to the necessity of performing inspection, diagnosis and intervention works, leading to knowledge based decision making procedures. it should also conform to the following requirements: observance of the deontology of international conventions that demand the preservation and presentation of historic, sentimental virtues and the architecture of monuments, while preserving the authentic materials, forms and structures. serviceability of the conservation interventions and restorations (so that the building can accept safely the new uses and face the earthquake risk) compatibility of the materials and conservation interventions with authentic materials, the building and its environment sustainability (increase of lifetime, protection of the environment and energy savings, minimization of environmental impact on the monument) therefore, it has become obvious that the proposed guideline should not be a simply integration of existing projects, instead it should build upon current experiences and existing knowledge, encompassing all potential data regarding building structure, non structural elements, architectural value and all other aspects from functionality to historic value. 5. conclusions a combination of scientific, architectural, historic and cultural knowledge and experience of building conservation is indispensable for the study of all immovable cultural heritage objects reversing the current trend in focusing on specific aspects of documentation. an effective documentation protocol should be able to respond to the necessity of performing inspection, diagnosis and intervention works, leading to knowledge based decision making procedures. based on existing initiatives, policies and directives that determine the established practice in the field of cultural heritage documentation, criteria were derived that will allow any new guideline for monument documentation to be harmonized with existing european standards and codes. such a standard methodology for monument documentation could bring the following advantages: a) standardized methodology in the eu countries means comparable data on the condition of cultural heritage in europe, b) standardized data/ outputs are comparable (this means clearly defined database entry without any further need for definition) and c) translation of the standard by a national standardization committee provides a unified terminology. in this context, the development of recommendations and strategies, as described in this work, is a vital step in establishing a guideline for monument documentation that will offer a unified methodology at a european level. 6. references [1] moropoulou, a., chandakas, b., togkalidou, t., karoglou, m., padouvas, e., “a new methodology for quality control and monitoring of historic buildings: a tool for lifetime engineering”, symposium proceedings, 2nd international symposium, ilcdes integrated life-time engineering of buildings and civil infrastructures, kuopio, finland (2003) pp. 
269-274 [2] togkalidou, t., moropoulou, a., karoglou, m., padouvas, e., chandakas, b., “system for conservation management of historic buildings incorporating quality control principles”, itecom conference on „innovative technologies and materials for the protection of cultural heritage, industry, research, education: european acts and perspectives‟, technical chamber of greece, athens (2003) pp. 365-376 [3] eu-chic project website: http://www.eu-chic.eu/ [4] vioe – vlaams instituut voor het onroerend erfgoed: www.vioe.be _______________________________________________________________________________________ geoinformatics ctu fce 2011 184 [5] database of cultural heritage in the brussels region: www.irismonument.be [6] database of the cultural heritage in the walloon region: mrw.wallonie.be/dgatlp/ipa/ [7] monumentenwacht vlaanderen: www.monumentenwacht.be [8] integrovaný informační systém památkové péče (iispp): https://iispp.npu.cz/ [9] adabweb – allgemeine denkmaldatenbank: http://www.denkmalpflegebw.de/denkmale/datenbanken/adabweb.html [10] national archive of monuments information system (polemon): http://nam.culture.gr/portal/page/portal/deam/erga/nam [11] acropolis restoration service (ysma): http://www.ysma.gr [12] sistema informativo generale per il catalogo (sigec): http://www.iccd.beniculturali.it [13] carta del rischio: http://www.cartadelrischio.it/eng/index.html [14] inventory of the cultural heritage in the grand-duchy of luxemburg (buildings and landscapes): www.ssmn.public.lu/patrimoine/index.html [15] compilation of data inventory cards which constitutes the national protective inventory: www.mepa.org.mt [16] cultural heritage register: http://rkd.situla.org/ [17] core data index to historic buildings and monuments of the architectural heritage. recommendation r (95) 3 of the committee of ministers of the council of europe to member states on co-ordinating documentation methods and systems related to historic buildings and monuments of the architectural heritage. strasbourg: council of europe, 1995. [18] unesco‟s world heritage list: http://whc.unesco.org/en/list [19] european heritage network: http://www.european-heritage.net/sdx/herein/national_heritage/introduction.xsp [20] guidance on inventory and documentation of the cultural heritage, council of europe, (2009) [21] international council on monuments and sites: http://www.icomos.org/ [22] getty conservation institute: http://www.getty.edu/conservation/ [23] recording, documentation and information management an international initiative for historic monuments and sites: http://extranet.getty.edu/gci/recordim/ [24] european committee for standardization – tc 346: http://www.cen.eu/ [25] article 151 – ec treaty (maastrich 1993): http://europa.eu.int/eur-lex/en/treaties/selected/livre234.html [26] european council, reference texts in cultural heritage: conventions: http://www.coe.int/t/e/cultural_cooperation/heritage/resources/reftxtculther.asp [27] european conference declaration for improving cultural heritage research. international conference sustaining europe‟s cultural heritage: from research to policy. university college of london, ec commission (2004) www.ucl.ac.uk/sustainableheritage/conference-proceedings/london-declaration.html [28] european parliament. directorate general for research, luxembourg, october 2001 [29] council resolution of 21 january 2002 on the role of culture in the development of the european union. official journal c 032 , 05/02/2002 p. 
0002 – 0002: http://europa.eu.int/eurlex/lex/lexuriserv/lexuriserv.do?uri=celex:32002g0205(02):en:html [30] council resolution of 26 may 2003 on the horizontal aspects of culture: increasing synergies with other sectors and community actions and exchanging good practices in relation to the social and economic dimensions of culture. official journal c 136 , 11/06/2003 p. 0001 – 0002: http://europa.eu.int/eurlex/lex/lexuriserv/lexuriserv.do?uri=celex:32003g0611(01):en:html [31] directive 2000/60/ec of the european parliament and of the council, establishing a framework for the community action in the field of water policy. http://europa.eu.int/eur-lex/pri/en/oj/dat/2000/l_327/l_32720001222en00010072.pdf [32] communication from the commission to the council and the european parliament thematic strategy on air pollution com(2005) 446 final, brussels, 21.9.2005: http://europa.eu.int/eurlex/lex/lexuriserv/site/en/com/2005/com2005_0446en01.pdf [33] the amsterdam treaty: http://europa.eu.int/scadplus/leg/en/s50000.htm [34] council directive of 27 june 1985 on the assessment of the effects of certain public and private projects on the environment. 85/337/eec. official journal no. l 175 , 05/07/1985 p. 0040 – 0048: http://europa.eu.int/comm/environment/eia/full-legal-text/85337.htm [35] http://eur-lex.europa.eu/lexuriserv/site/en/com/2004/com2004_0038en01.pdf [36] article 3 (t) of the treaty on european union. official journal c 191, 29 july 1992 http://europa.eu.int/eurlex/lex/en/treaties/dat/11992m/htm/11992m.html [37] chandakas v., “criteria and methodology for the quality control of restoration works – protection of monuments and historic buildings”, phd dissertation, (2004), national technical university of athens, supervisor prof. a. moropoulou [38] kioussi a. “development of a data base for the documentation of monuments and historic buildings. integration into a system of total quality control, diagnostic and conservation intervention study”, msc dissertation, (2005), national technical university of athens, supervisor prof. a. moropoulou design of a survey net for metric survey documentation of a historical building design of a survey net for metric survey documentation of a historical building zdeněk poloprutský laboratory of photogrammetry, department of geomatics, czech technical university in prague, czech republic zdenek.poloprutsky@fsv.cvut.cz abstract. this paper deal with the design and the creation of a survey net during the metric survey documentation of the current state of a historical building. the paper aims to define the general rules for the design of the survey net, which are based on the least squares method (lsm), huber´s m-estimation and the requirement of practical heritage preservation. the paper presents three different examples of survey nets and types of historical buildings. keywords: adjustment of survey net; detailed survey; gnss; historical building; least squares method (lsm); metric survey documentation; reference map scale. 1. introduction the paper deals with the design and the construction of a survey net. a survey net provides the default spatial framework for the metric survey documentation of the current state of a historical building. currently, it is already well described by the least squares method (lsm), which is a popular option for calculating the coordinates of survey points in a survey net. for this purpose, lsm is often modified to increase the credibility of their results. 
historical buildings are often structures of very complex shape, and good-quality metric survey documentation of them therefore requires an individual approach. this paper tries to formulate general principles for the design and construction of survey nets and for detailed surveys, and to summarize the variants of the adjustment calculation of a survey net. the outcomes of the paper were tested in three case studies.

2. general issues

the metric survey documentation of historical buildings is an essential part of heritage preservation practice and of scientific research into historical architecture, which primarily takes the form of the building archaeological survey [13]. good-quality documentation is used as the basis for design work, for the assessment of planned rebuilding, for scientific research, etc. examples of scientific research are, e.g., the evaluation of the construction development of a particular building or the comparison of common features of several buildings. in the case of the destruction of a building or its part, the metric survey documentation of a historical building becomes a unique source of information: an archival document, a basis for reconstruction, etc. the submitter of the metric survey documentation has to know the purposes for which the documentation is made and must be aware of the financial means available for it. he must clearly define his requirements and explain them to the contractor. the content of the assignment can be summarized in accordance with the current methodology of the national heritage institute npú [23]:

1. precise determination of the subject and the survey area
2. coordinate system and the vertical coordinate system
3. required level of detail
4. required level of accuracy
5. required outputs
6. the method of calculating the price and the conditions for its modification in the future
7. determining the date of submission of the survey
8. conditions for taking over the finished survey

2.1. object and area of surveys

the survey net has to respect the assignment and connect the individual detailed surveys into one whole. it is possible to predict the basic dimensions of the survey net and the reference map scale from the area of interest and the level of detail. for the design of the survey net, it is useful to use open data, i.e. base maps which are freely available on the internet, such as geoportal čúzk [3], mapy.cz [19], openstreetmap [11], etc. the first version of the design of the survey net can be based on them. it may be useful to prepare field sketches with the design of the survey net for the upcoming detailed surveys. in some cases, it is advisable to print the field sketches, see table 1.

table 1: overview of dimensions of survey nets in dependence on reference map scales and paper sizes [12]. paper sizes (iso 216 / din 476), width × height [mm], and the corresponding basic dimensions of survey nets, width × height [m], at reference map scales 1 : 20 / 1 : 50 / 1 : 100 / 1 : 200 / 1 : 500:
a0, 841×1189: 16.8×23.8 / 42.1×59.5 / 84.1×118.9 / 168.2×237.8 / 420.5×594.5
a1, 594×841: 11.9×16.8 / 29.7×42.1 / 59.4×84.1 / 118.8×168.2 / 297.0×420.5
a2, 420×594: 8.4×11.9 / 21.0×29.7 / 42.0×59.4 / 84.0×118.8 / 210.0×297.0
a3, 297×420: 5.9×8.4 / 14.9×21.0 / 29.7×42.0 / 59.4×84.0 / 148.5×210.0
a4, 210×297: 4.2×5.9 / 10.5×14.9 / 21.0×29.7 / 42.0×59.4 / 105.0×148.5
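the entries in table 1 are simple rescalings of the iso 216 sheet dimensions by the scale number. the following short python sketch reproduces the listed extents; the paper-size dictionary and the function name are only illustrative, the numbers themselves come from the table above.

```python
# sketch: terrain extent that fits on an ISO 216 sheet at a given map scale
# (reproduces table 1; values in metres, rounded as in the table)
PAPER_MM = {"a0": (841, 1189), "a1": (594, 841), "a2": (420, 594),
            "a3": (297, 420), "a4": (210, 297)}

def ground_extent(paper: str, scale_number: float) -> tuple[float, float]:
    """Width x height [m] of the survey net that fits on the chosen sheet."""
    w_mm, h_mm = PAPER_MM[paper]
    return (w_mm * scale_number / 1000.0, h_mm * scale_number / 1000.0)

print(ground_extent("a3", 50))    # (14.85, 21.0) -> 14.9 x 21.0 in table 1
print(ground_extent("a0", 500))   # (420.5, 594.5), the largest entry of table 1
```

for example, an a3 sheet at a reference map scale of 1 : 50 covers roughly 14.9 m × 21.0 m of the surveyed object, which is the value given in the table.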
2.2. selection of geodetic reference systems

the selection of a geodetic reference system results from the aims of the work, the available equipment, the base maps and the required level of detail (lod) and level of accuracy (loa). furthermore, the selection of a geodetic reference system often depends on whether the metric documentation will be used in specialized software, such as cad, gis, etc. mostly it is easiest to use a local coordinate system. for the connection to a geodetic reference system, points of horizontal and vertical geodetic control must be available, or sufficiently precise equipment for real-time kinematic (rtk) surveying. the datum of uniform trigonometric cadastral network (s-jtsk), epsg: 5514, and the baltic vertical datum after adjustment (bpv), epsg: 5705, are mostly used in the czech republic [17]. the selection of national geodetic reference systems makes it possible to supplement the surveys with other datasets provided by the czech office for surveying, mapping and cadastre (čúzk) [17, 3].

2.3. choices of levels of detail and accuracy for surveys

the lod must match the importance of the surveyed building, the purpose for which the given documents are processed, and the resources that are available. detailed and accurate metric documentation is time consuming and therefore costly. as practice confirms, surveys should always be done so as to allow the outcomes to be more detailed than the primary documentation. this requirement typically saves a lot of processing time, but it is not always feasible because its limits are related to the choice of survey methods and the cost of labour [23]. it is useful to classify the types of metric documentation into four basic classes:

1. tentative: mostly a sketch, dimensions are estimated; the lod corresponds to a reference map scale of 1 : 100 and below;
2. basic: usually metric documentation with side measures; the lod corresponds to a reference map scale of 1 : 50 and below;
3. detailed: based on a full geodetic measurement; the lod corresponds to a reference map scale of 1 : 50;
4. shape trustworthy: based on advanced measurement techniques, such as geodetic measurements, laser scanning, photogrammetry, etc.; the lod corresponds to a reference map scale of 1 : 20.

for detailed surveys, it is convenient that the survey net can be considered faultless. this can be achieved by measuring the survey net with a higher accuracy than the loa required of the outcomes. the survey net can be considered faultless in the light of the metric documentation if its standard deviation of distance σ_dij does not exceed the limit standard deviation of the distance σ_Tdij, which corresponds to the thickness of the thinnest line of the metric documentation, x_tl. the horizontal distance d between points i [x_i, y_i] and j [x_j, y_j] is defined as

d_ij = √((x_j − x_i)² + (y_j − y_i)²). (1)

the standard deviation of the distance σ_dij can be defined in conformity with (1) as

σ_dij = √((−Δx_ij/d_ij)²·σ²_xi + (Δx_ij/d_ij)²·σ²_xj + (−Δy_ij/d_ij)²·σ²_yi + (Δy_ij/d_ij)²·σ²_yj),

where simplifications can be introduced, such as
• the circle of deviations: σ_x ≈ σ_y ≈ σ_xy, and
• the uniform accuracy of the survey net: σ_xyi ≈ σ_xyj ≈ σ_xy.

after that, σ_dij is given by

σ_dij = σ_xy · √2. (2)
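as a quick numerical check of relation (2), propagating equal and uncorrelated coordinate standard deviations through the distance formula (1) does give σ_xy·√2. a minimal numpy sketch, with arbitrarily chosen demonstration coordinates and a 5 mm coordinate standard deviation (neither value comes from the paper):

```python
import numpy as np

# propagate equal, uncorrelated coordinate std. deviations through d_ij
xi, yi, xj, yj = 10.0, 20.0, 43.0, 57.0    # demo coordinates [m]
sigma_xy = 0.005                           # 5 mm for every coordinate

d = np.hypot(xj - xi, yj - yi)
# partial derivatives of d_ij with respect to (xi, yi, xj, yj)
J = np.array([-(xj - xi), -(yj - yi), (xj - xi), (yj - yi)]) / d
cov = np.eye(4) * sigma_xy**2              # circle of deviations, uniform accuracy
sigma_d = float(np.sqrt(J @ cov @ J))

print(sigma_d, sigma_xy * np.sqrt(2))      # both approx. 0.00707 m
```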
after that, the limit standard deviation of coordinates σ_Txy can be derived in conformity with (2) as

σ_Txy = σ_Tdij / √2 = x_tl · m / √2, (3)

where m is the scale number of the metric documentation. the thickness of a line x_tl follows the typographic and cartographic principles that can be taken from the cadastral map [16, 12], see table 2.

table 2: accuracy analysis of a survey net in dependence on line thicknesses and reference map scales. limit standard deviations σ_Txy [mm] for line thicknesses [mm] at reference map scales 1 : 20 / 1 : 50 / 1 : 100 / 1 : 200 / 1 : 500:
0.13: 1.8 / 4.6 / 9.2 / 18.4 / 46.0
0.16: 2.3 / 5.7 / 11.3 / 22.6 / 56.6
0.18: 2.5 / 6.4 / 12.7 / 25.5 / 63.6
0.20: 2.8 / 7.1 / 14.1 / 28.3 / 70.7
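relation (3) is what generates table 2. a small python sketch of the same arithmetic (the function name is illustrative only) reproduces the tabulated limit standard deviations from the line thickness and the scale number:

```python
from math import sqrt

def limit_sigma_xy(line_thickness_mm: float, scale_number: float) -> float:
    """Limit standard deviation of coordinates sigma_Txy [mm], relation (3)."""
    return line_thickness_mm * scale_number / sqrt(2)

for x_tl in (0.13, 0.16, 0.18, 0.20):
    row = [round(limit_sigma_xy(x_tl, m), 1) for m in (20, 50, 100, 200, 500)]
    print(x_tl, row)
# 0.13 -> [1.8, 4.6, 9.2, 18.4, 46.0], matching the first row of table 2
```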
the weights of intermediate measurements p are defined as p = q−1l = ( d ·ql ·dt )−1 (4) where p represents the weight matrix of intermediate measurements, ql represents the covariance matrix of intermediate measurements, ql represents the covariance matrix of direct measurements and d represents the model matrix of the linear relationships between differential changes of intermediate and direct measurements, the so-called matrix of plan. the math functions for the processing and analyses of intermediate measurements are described in detail in [7]. 3.2. adjustment of a survey net by the adjustment of redundant observations for the adjustment of a survey net it is necessary to have ready a set of intermediate measurements, see table 3, a set of their standard deviations and a set of approximated coordinates of adjusted points. the unknowns are the coordinates of adjusted points, respectively orientation shifts on adjusted points, in the adjustment calculation. the equation of residual is made for each observation. the survey net can be aligned asa fixed or loose net. the fixed survey net contains minimally two points of known coordinates, ie. planar or spatial ones, which are part of the net and their coordinates are fixed [7]. the following subsections deal with the adjustment of the fixed survey net in more detail, including modifications considering the inaccuracy of fixed point coordinates, and robust methods for finding remote observations. geoinformatics fce ctu 16(1), 2017 67 z. poloprutský: design of a survey net in a historical building φij i [x,y,h] j [x,y,h] 0dij dij d´ij φij α ωij ζij *ζij *ζij vp vc δv oζij φi φij figure 1: configuration of direct and intermediate measurements. fixed survey net in general, the adjustment of the fixed survey net by lms contains three basic sets: l . . . a set of direct measurements of r elements, l .. . a set of intermediate measurements of m elements, x ... a set of unknowns of n elements and p conditions. for the fixed survey net, the configuration condition p = 0 is defined, i.e.: r ≥ m > n−p ⇒ r ≥ m > n. (5) the math functions for the adjustment of the fixed survey net by lms are described in detail in [7]. the summary of the most important of them is presented here: v̂ = a · d̂x + l′, (6) x̂ = x0 + d̂x = x0 − ( at ·p ·a )−1 ·at ·p · l′, (7) and l̂ = l + v̂ (8) where p represents the weight matrix of intermediate measurements l, a represents the matrix of plan intermediate measurements l and l′ is defined as l′ = f(x0) −l. geoinformatics fce ctu 16(1), 2017 68 z. poloprutský: design of a survey net in a historical building lms is defined as an iterative process, i.e. xj+10 = x̂ j. a condition which completes the iterative calculation with the selected accuracy may be defined as vi ≈ vii in that case vi = a · d̂x + l′ ≈ vii = f(x̂) −l. (9) survey net with residuals in bases in some cases, it may be appropriate to connect the survey net to the points of known coordinates and consider their precision characteristics. by doing this, the aligned coordinates of the network points include not only local accuracy to the connection points but also their global accuracy with respect to the geodetic reference system. the disadvantage of this procedure is the change of the coordinates of the connection, i.e. “fixed”, points. the coordinates of the connecting points are introduced into the calculations as pseudoobservations into the set of intermediate measurements l and the weight matrix p [7]: p = [ q−1l 0 0 q−1xy z ] . 
survey net with residuals in bases

in some cases, it may be appropriate to connect the survey net to points of known coordinates and to consider their precision characteristics. by doing this, the aligned coordinates of the net points include not only the local accuracy relative to the connection points but also their global accuracy with respect to the geodetic reference system. the disadvantage of this procedure is the change of the coordinates of the connection, i.e. “fixed”, points. the coordinates of the connection points are introduced into the calculation as pseudo-observations in the set of intermediate measurements l and in the weight matrix P [7]:

P = [ Q_l⁻¹ 0 ; 0 Q_xyz⁻¹ ]. (10)

this type of adjustment is advantageous in cases where the connection points are determined by gnss. to a limited extent, the adjustment can be used for the connection to geodetic bases. if the coordinate increments correspond to the a priori precision of the coordinates of the connection points, they can be neglected. otherwise, the connection point must be eliminated from the alignment or “fixed” in the adjustment calculation.

robust methods for finding remote observations in a survey net

robust statistical methods modify the classical methods of adjustment by lsm with statistical tests. they have a higher resistance to the degradation of the results by remote observations. for practical application in geodetic tasks, m-estimates are the most suitable. they allow sufficient flexibility and can be computationally managed even in generalized linear model solutions. the principle of the application of robust m-estimates is based on an iterative modification of lsm which respects gradual changes in the weights of the individual observations. the change of the robust scale (weight) w of a measurement depends on the value of its correction v, i.e. on the estimate of the unknown x. the more remote the observation, the lower its robust weight is and thus the lower its impact on the adjustment. in this way, the influence of outliers is gradually removed, giving a robust estimate that is largely independent of the effect of remote observations. the robust weights w can be entered into the weight matrix W as

W = diag(w_1, w_2, ..., w_n). (11)

figure 2: huber's m-estimation weight function [7, 21].

these weights can be input into the solution of the set of linearised equations for d̂x in the general j-th iterative step of lsm as

d̂x^(j) = −(Aᵀ · W^(j) · P · A)⁻¹ · Aᵀ · W^(j) · P · l′. (12)

the application of robust estimates for the detection of outliers is a highly effective method, but its usefulness is conditioned on many factors and input conditions. in order to achieve corresponding results, a sufficient number of redundant observations and a low incidence of remote observations in the measurement set l must be ensured [7, 21].
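the huber weight function sketched in figure 2 and the re-weighted solution (12) can be combined into an iteratively re-weighted variant of the step above. the threshold c and all inputs below are illustrative assumptions, not values prescribed by the paper:

```python
import numpy as np

def huber_weights(v, sigma, c=1.5):
    """Huber's M-estimation weights: 1 inside c*sigma, decaying as c/u outside."""
    u = np.abs(v) / sigma
    return np.where(u <= c, 1.0, c / u)

def robust_step(A, P, l_red, sigma_v, c=1.5, n_iter=10):
    """Re-weighted solution of the linearised equations, cf. relations (11)-(12)."""
    W = np.eye(A.shape[0])                              # start with unit robust weights
    for _ in range(n_iter):
        N = A.T @ W @ P @ A
        dx = -np.linalg.solve(N, A.T @ W @ P @ l_red)   # relation (12)
        v = A @ dx + l_red
        W = np.diag(huber_weights(v, sigma_v, c))       # relation (11), updated W
    return dx, np.diag(W)                               # final shift and robust weights
```

observations whose final weight drops well below 1 are the candidates for the remote (outlying) observations mentioned above.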
4. case studies

this section presents case studies in which the described computing methods were applied.

4.1. survey net: kestřany

in this case, the assignment was to design and build a survey net which would serve as a spatial framework for detailed surveys. the subjects of the detailed surveys were selected rooms in the historical building called the upper fortress in the village of kestřany, písek district, czech republic. ground laser scanning was chosen as the main method of the detailed surveys. the metric survey documentation of a part of the historical building was the assignment for two bachelor theses [18, 4]. the building was surveyed both in the exterior and in the interiors, on the ground floor and in the cellars. the detailed surveys were carried out together with the reconnaissance of the terrain and the construction of the survey net. the leica ts02 flexline [9] was used to measure the net; the connection to s-jtsk and bpv was made by rtk surveying on selected points of the survey net, i.e. 5001-5006. the leica viva gs15 gps [9] was used for the rtk surveying. the digital field books, i.e. of the total station, were processed in the kokeš software [6]. the processing and analyses of the intermediate measurements were made using custom source code in the matlab software [10]. the survey net was aligned in the gama-local software [2]. the horizontal distances were corrected for the impact of křovák's universal conformal conical projection and the impact of the altitude. for all calculations and accuracy testing, the 5% level of significance was used. the global position precision corresponds to the lod and loa of the metric survey documentation in a reference map scale of 1 : 50, see table 4.

table 4: basic parameters of the adjustment of the survey net in kestřany.
type of coordinates (xyz / xy / z): adjusted 14 / 0 / 0; constrained 2 / 0 / 0; fixed 0 / 0 / 0; summary 14 / 0 / 0.
set statistics: directions 29, distances 20, coordinates 6, height differences 19, summary 74; sets of directions 11, residual equations 74, redundant observations 21, unknowns 53, defects of survey net 0.
precision statistics: m0 a priori 1, m′0 a posteriori 0.94, [pvv] 18.45, m0/m′0 0.937, 95% interval (0.700; 1.300), max. σxy 8.5 mm, average σxy 5.0 mm.

figure 3: the scheme of the survey net in kestřany.

4.2. survey net: radkov

the radkov archaeological site is located in the svitavy district, czech republic. in this case, the assignment was to design and build a survey net which would serve as a single spatial framework for detailed mapping, geophysical surveys and digital terrain models (dtm) based on datasets from airborne laser scanning (als) [14]. a two-stage measurement was carried out. the first stage was done together with the reconnaissance of the terrain and the construction of the survey net; the leica ts06 flexline [9] was used and the trimble r8s gnss [22] provided the connection to s-jtsk and bpv. the latter stage was focused only on the survey net; the leica nova ts50 was used for this measurement [9]. it was not possible to perform a net calculation from the datasets of the first measurement. that is why the coordinates were calculated by traverses without orientation and by oriented distances. the horizontal distances were corrected for the impact of křovák's universal conformal conical projection and the impact of the altitude. the coordinates are in s-jtsk and bpv. the second measurement was aligned in the easynet software [1] as a geodetic micro-network. for all calculations and accuracy testing, the 5% level of significance was used, and huber's method was chosen for the robust analyses. in the next step, the coordinates obtained from the network alignment were transformed onto the coordinates calculated from the first measurement by a congruent transformation with adjustment of the transformation key. this calculation method preserved the internal precision of the survey net and docked it in s-jtsk. the global position precision is given by the standard deviations of the identical points determined by the rtk surveying, i.e. σxy = 0.021 m and σz = 0.05 m. the global position precision corresponds to the lod and loa of the metric survey documentation in a reference map scale of 1 : 200.

4.3. survey net: holubice

the church of the nativity of the virgin mary is located in the village of holubice, prague-west district, czech republic. in this case, the assignment was to design and build a survey net which served as a spatial framework for detailed surveys. the output was the basis for the building archaeological survey [8] and for the 3d model based on the photogrammetric processing of image data [5] of the historical sacral building.
a two-step detailed survey was performed. the surveys were carried out together with the geoinformatics fce ctu 16(1), 2017 72 z. poloprutský: design of a survey net in a historical building figure 4: the plan of the fortified hilltop settlement radkov [15]. reconnaissance of the terrain and the construction of the survey net. the first survey was performed by leica ts02 flexline [9] and the second survey was performed by leica ts06 flexline [9]. since the stabilization marks of all the witness points of the trigonometric point forming the church tower were not found in the terrain, it was not possible to connect the network to s-jtsk. therefore, it was aligned as a local network. the network was height-connected to bpv. digital field books, i.e. of the total station, were processed in the kokeš software [6]. the processing and analyses of intermediate measurements were made using the custom production source code in the matlab software [10]. the survey net was aligned in the gama-local software [2]. the horizontal distances were corrected for the impact of křovák´s universal conformal conical projection and the impact of the altitude. for all calculations and accuracy testing, the 5% level of significance was used. the local position precision corresponds to the lod and loa of the metric survey documentation in a reference map scale 1 : 20. 5. conclusion this section summarizes and explains the principles which it is preferable to observe in the design of survey nets in historical buildings. geoinformatics fce ctu 16(1), 2017 73 z. poloprutský: design of a survey net in a historical building table 5: basic parameters of the adjustment of the survey net in radkov, internal precision. type of coordinates xyz xy z adjusted 13 0 0 constrained 0 0 0 fixed 0 0 0 summary 13 0 0 set statistics directions 130 sets of directions 14 slope distances 137 residual equations 401 zenithal distances 134 redundant observations 352 height differences 0 unknowns 49 summary 401 defects of survey net 4 precision statistics m0 a priory 1 m′0 a posteriori 0.704m0/m′0 0.704 max. σxy [mm] 0.56 average σxy [mm] 0.33 figure 5: the scheme of the survey net in holubice. geoinformatics fce ctu 16(1), 2017 74 z. poloprutský: design of a survey net in a historical building table 6: basic parameters of the second phase of the adjustment of the survey net in holubice. type of coordinates xyz xy z adjusted 11 1 0 constrained 3 2 1 fixed 0 0 1 summary 11 1 1 set statistics directions 17 sets of directions 6 distances 14 residual equations 65 coordinates 20 redundant observations 24 height differences 14 unknowns 41 summary 65 defects of survey net 0 precision statistics m0 a priory 1 m′0 a posteriori 1.16 [pvv] 32.33 m0/m ′ 0 1.162 95% interval (0.719; 1.281) max. σxy [mm] 2.4 average σxy [mm] 1.7 survey net configuration towards the building generally, the configuration of the survey net is advantageous when the surveyed object is disposed at the centroid of the survey net. the centroid of the survey net may not be stabilized in the field; it may be a mathematically determined point that serves as a reference point for which, for example, mathematical reductions can be applied to a geodetic reference system. from the expected reference map scale of the metric survey documentation and lod of the detailed survey, the basic dimensions of the survey net can be estimated, see table 1. 
geometry of the survey net

in this case, the geometry of the survey net means the deployment of the survey points in the vicinity of and within the documented building. the distribution of survey stations in the survey net influences the shape of the net, its density and its side ratios. these parameters affect the calculation and accuracy of the survey net. each survey point should be chosen so as to achieve the maximum possible visibility to the surrounding survey points of the survey net and thus to ensure redundant observations. this allows adjustment of the survey net by lsm.

layout and selection of the stabilization of survey points

when stabilizing survey points in a building, it is preferable to proceed from “small to large”, because the stabilization of the net provides more freedom to choose the optimal net geometry in the exterior than in the interior. in the exterior, the survey net should have the documented object located at its centroid. in the case of the stabilization of survey points in the interior, a survey point is ideally located in each room, because it is advantageous to know the altitude in each room for the purposes of the metric survey documentation of the historical building. if it is possible to see through the door openings across multiple rooms, it is preferable to select the survey points so that they can be densified by means of temporary stations.

surveying equipment

the basic measuring equipment is the so-called set for the three-tripod system [20] and a tape measure, or a pole with stands, or a combination of both. the interior may need additional lighting, for which it is necessary to provide luminaires, e.g. halogen lamps, a power supply and, if necessary, extension cables. in the exterior, it may be advantageous to use a gnss apparatus to connect the survey net to the reference systems. additionally, it may be advantageous to use a levelling kit, i.e. a levelling instrument, a tripod, a levelling staff and a footplate, for levelling.

precision analyses, adjustment of a survey net and processing of observations

in practice, measurements in historical buildings can be compared to measurements in underground spaces, i.e. the configuration of the survey net is so-called “thin”, which results in fewer redundant observations and in survey points connected only from one side. the effect of centering the instrument and targets becomes apparent in the measuring accuracy when the survey net dimensions are in the order of tens to hundreds of meters. survey points in the interior must be connected through window and door openings with survey points in the exterior. this results in measurements under different lighting and refraction conditions that can affect the accuracy of targeting. it is advantageous to carry out precision analyses and mathematical modelling of the inputs before the survey, using a combination of software for the adjustment of survey nets by lsm, basic geodetic calculations and base maps. for precision analyses before the measurement and for the measurement itself, it is appropriate to follow several general principles:

• for a more accurate estimate of the accuracy of angular measurements, it is advisable to measure in at least two sets.

• if possible, slope distances, zenithal distances and height differences should be measured reciprocally, i.e. in both directions.
• two planimetric survey points are required for the connection to the positional coordinate reference system; more planimetric survey points are preferable.

• one height point is required for the connection to the vertical coordinate reference system; more height points are preferable.

the method of adjustment of a survey net depends on the configuration and geometry of the survey net. the use of robust adjustment methods is worthwhile in the case of an areal survey net with a large number of redundant observations, i.e. 50% and above, see the easynet software [1]. robust adjustment methods may fail in the case of “sparse” survey nets with a small number of redundant observations, i.e. up to 50%. in such cases other software must be used, such as the gama-local software [2].

the outcomes from detailed surveys must be processed according to the requirements defined in the specification. as a rule, they are further developed in specialized cad, gis software, etc.

acknowledgements

this work was supported by the grant agency of the czech technical university in prague, grant no. sgs17/068/ohk1/1t/11.

references

[1] adjust solutions. easynet. 2016. url: http://adjustsolutions.cz/en/easynet/ (visited on 04/17/2017).

[2] aleš čepek. gnu gama. comp. software. gnu operating system, 1998–2017. url: https://www.gnu.org/software/gama/.

[3] čúzk. geoportal čúzk: access to map products and services. 2010. url: http://geoportal.cuzk.cz/ (visited on 04/12/2017).

[4] kristýna doležalová. upper fortress kestřany (písek) – metrical documentation of selected part. ed. by jindřich hodač. bachelor thesis. prague: department of geomatics, fce ctu in prague, 2017.

[5] m. faltýnová et al. “complex analysis and documentation of historical buildings using new geomatic methods”. in: stavební obzor – civil engineering journal 25.4 (2016). issn: 1805-2576. doi: 10.14311/cej.2016.04.0027.

[6] gepro. kokeš. 2017. url: http://www.gepro.cz/produkty/kokes/ (visited on 04/17/2017).

[7] miroslav hampacher and martin štroner. zpracování a analýza měření v inženýrské geodézii. praha: katedra speciální geodézie, české vysoké učení technické v praze, 2011. isbn: 978-80-01-04900-6.

[8] milena hauserová et al. “the church and its patrons: two chapters from the history of building the rotunda of the nativity of the virgin mary in holubice”. in: dějiny staveb 2016. 17. mezinárodní konference dějiny staveb 2016. vol. 17. pilsen, czech republic: klub augusta sedláčka, 2016, pp. 7–16. isbn: 978-80-87170-48-9.

[9] leica geosystems. leica geosystems. 2017. url: http://leica-geosystems.com/en/ (visited on 06/02/2017).

[10] mathworks. matlab. 2017. url: https://www.mathworks.com/products/matlab.html (visited on 04/17/2017).

[11] © openstreetmap contributors. openstreetmap. 2017. url: http://www.openstreetmap.org/ (visited on 07/04/2017).

[12] paper size. in: wikipedia. wikimedia foundation, may 31, 2017. url: https://en.wikipedia.org/w/index.php?title=paper_size&oldid=783113730 (visited on 06/03/2017).

[13] jana pařízková čevonová et al. building archaeology survey: a methodology. ed. by jan beránek and petr macek. 70. oclc: 949216987. prague: national heritage institute, 2015. isbn: 978-80-7480-037-5.
[14] z. poloprutský, m. cejpová, and j. němcová. “non-destructive survey of archaeological sites using airborne laser scanning and geophysical applications”. in: isprs international archives of the photogrammetry, remote sensing and spatial information sciences xli-b5 (june 15, 2016), pp. 371–376. issn: 2194-9034. doi: 10.5194/isprs-archives-xli-b5-371-2016.

[15] zdeněk poloprutský. the plan of the fortified hilltop settlement radkov. specialized map with expert content. prague, 2016. url: http://lfgm.fsv.cvut.cz/main.php?cap=0&zal=475&lang=en (visited on 03/23/2016).

[16] česká republika. 357/2013 sb. o katastru nemovitostí (katastrální vyhláška). nov. 1, 2013. url: https://portal.gov.cz/app/zakony/zakon.jsp?page=0&nr=357~2f2013&rpp=15#seznam (visited on 01/06/2016).

[17] česká republika. 430/2006 sb. o stanovení geodetických referenčních systémů a státních mapových děl závazných na území státu a zásadách jejich používání. aug. 16, 2006. url: http://portal.gov.cz/app/zakony/zakonpar.jsp?idbiblio=63017&fulltext=&nr=430~2f2006&part=&name=&rpp=15#local-content (visited on 12/31/2015).

[18] zuzana richtrová. upper fortress kestřany (písek) – metrical documentation of selected part. ed. by jindřich hodač. bachelor thesis. prague: department of geomatics, fce ctu in prague, 2016.

[19] © seznam.cz. mapy.cz. 2017. url: https://mapy.cz/ (visited on 07/04/2017).

[20] terminological commission of the czech office for surveying, mapping and cadastre. terminological dictionary vúgtk. in: terminological dictionary of geodesy, cartography and cadastre. zdiby: vúgtk, 2016. url: http://www.vugtk.cz/slovnik/index.php?jazykova_verze=cz (visited on 01/06/2016).

[21] pavel třasák and martin štroner. “outlier detection efficiency in the high precision geodetic network adjustment”. in: acta geodaetica et geophysica 49.2 (2014), pp. 161–175. issn: 2213-5812. doi: 10.1007/s40328-014-0045-9.

[22] trimble. trimble: transforming the way the world works. 2017. url: http://www.trimble.com/ (visited on 06/02/2017).

[23] jan veselý. metric survey documentation of historic buildings for use in heritage management. 49. oclc: 907529016. prague: national heritage institute, 2014. isbn: 978-80-86516-79-0.
photo scanner 3d survey for monitoring historical monuments. the case history of porta praetoria in aosta

paolo salonia1, tommaso leti messina1, andrea marcolongo1, lorenzo appolonia°

1 cnr, institute for technologies applied to cultural heritage, rome research area, via salaria, km 29.300, 00016 monterotondo st. (rome), italy
paolo.salonia@itabc.cnr.it, tommaso.letimessina@itabc.cnr.it, a.marcolongo@arch3.eu

° direzione ricerca e progetti cofinanziati, dipartimento soprintendenza per i beni e le attività culturali, assessorato istruzione e cultura, regione autonoma valle d'aosta

keywords: terrestrial photo scanner 3d, uav, image processing, point clouds rgb, multiscale, gis monitoring cultural heritage.

abstract: accessibility to cultural heritage is one of the most important factors in cultural heritage preservation, as it assures knowledge, monitoring, public administration management and a wide interest in cultural heritage sites. nowadays 3d surveys give the geometric basis for an effective artefact reconstruction, but most of the time 3d data are not completely and deeply investigated to extract other useful information on historical monuments for their conservation and safeguard. the cultural heritage superintendence of aosta decided to run a time-continual monitoring project of the praetorian roman gate with the collaboration of itabc, cnr of italy. the praetorian roman gate in aosta, italy, of the augustan age, is one of the most well-known roman monumental gates; it is a double gate with three arches on each side, 12 meters high and 20 meters wide, made of pudding-stone ashlars, bardiglio, travertine, marble blocks and other stone insertions due to restorations between 1600 and 1950. around the year 2000 a final restoration intervention brought the gate to its present state, within the frame of a restoration and conservation building site with the purpose of treating the different decay pathologies and conditions.
a complete 3d geometric survey campaign has been the first step for the monitoring of the gate morphologic changes and decay progress in time. the main purpose is to collect both quantitative data, related to the geometry of the gate, and the qualitative data, related to the chromatic change on the surface due to the stone decay. the geometric data with colour information permits to associate materials and stone pathologies to chemical or mechanical actions and to understand and analyse super ficial decay kinetics. the colours survey will also permit to directly locate on the 3d model areas of different stratigraphic units. the project aims to build a rigorous quantitative-qualitative database so to be uploaded into a gis. the gis will become the monitoring main means. considering the huge dimension of the gate and its urban location a multi-scale approach has been considered. controlled and free images have been taken from the ground and the top of the gate so to reconstruct all the walls and the upper cover. a topographic survey has been done so to be able to control and relate all the different acquisitions. it has been chosen a photo scanner 3d system. it is a photogrammetry-based survey technology for point clouds acquisition and 3d models configuration, from digital images processing. this technology allows to obtain point clouds (xyz coordinates) with rgb information and geometries at different levels of complexity by processing a number of images taken with a limited set of constraints, with the use of a simple acquisition equipment and through an image matching algorithm (zscan, by menci software). due to the high walls of the arch gates, the higher part has been surveyed with a remote controlled drone (uav unmanned aerial vehicle) with a digital camera on it, so to take pictures up to the maximum altitude and with different shooting angles ( 90 and 45 degree). this is a new technology which permits to survey inaccessible parts of a high monument with ease and accuracy, by collecting redundant pictures later bound together by an image block algorithm. this paper aims to present the survey experience architectural monuments trough the application of a trifocal quick photogrammetric system, in surveying at different scales and for different purposes. 1. the project 1.1 the case history monument “…one of the best monuments of the roman military architecture is undoubtedly the porta pretoria d'aosta, called porta s. urso, because of the near ancient church, and later named gate of the trinity from a chapel built in recent mailto:a.marcolongo@arch3.eu ___________________________________________________________________________________________________________ geoinformatics ctu fce 315 centuries. but the living traditions gives it, at least from the revival of letters on, the name of porta pretoria still the true appellation. (carlo promis, le antichità di aosta, capo vii, § 1. porta pretoria nello stato presente, stamperia reale, torino, 1862, pagg. 142-156) [1]. the praetoria door, or even praetorian gate or praetorians doors, is the gateway to the east of the roman town of augusta praetoria salassorum (today aosta). it dates to the founding of the city and is the largest among those that have reached us from the roman world, and “may be compared to the gate of mars or porta nigra in treviri, rhenish prussia... ” (carlo promis, op. cit.). figure 1: praetorian gate, plans and elevations in the reconstruction of carlo promis (tav. 
v) built in 25 bc, is still in excellent condition and is formed by two defensive parallel walls of about 13 m high (compared to the street level), and in the lower part opened by three arches, the main central one and two smaller on the sides, separated by a parade ground of 12 meters. the external screen is 4.50 meters thick, while the interior has a thickness of 3.45 meters (figure 1). on both edges are visible patrols, bounded by arched windows and defended by two towers. the central arch, which measures about 7 feet of light, it was intended for carriages to pass, while the two sides, 2.65 meters wide, for pedestrians. the three eastern lanes were closed with draw-gates, still visible in the housing. the two defensive towers have been reworked with a rectangular base over time, the northern most clearly, while the south still preserves some characteristics of roman architecture. the door is constructed of large blocks of conglomerate (natural conglomerate) and the outer face of the eastern part is still covered in gray-green marble (bardiglio of aymavilles), while the remains are of white marble carved with a frieze of the entablature, the cornice of 'corinthian, leaves, ovules, corbels, cornices and arches. to get an idea of the enormous size of the door, one of the most beautiful buildings so well preserved, it must be remembered that the floor of the roman city is at a level of about 2.60 meters below the existing roadbed. during the middle ages, the lords of quart took possession of the door and of the two flanked towers to turn them into a fortified dwelling. above the central eastern arch, at the ancient walkway, was built in the twelfth century a chapel dedicated to ss. trinity. until the eighteenth century a series of building obstructed the central and southern arches, and the only access to the city consisted of the northern passage: this explains why the road axis has moved, thus more oriented to the north. the restoration work carried out in the 30s of last century, involved the demolition of the medieval buildings to the exclusion of the north tower (tower of the lords) and the restoration of the southern regions now occupied by a restaurant. in december 2001, however, it was concluded the restoration of the stone parts of the monument. this intervention was necessary for the progressive deterioration caused by different environmental and climatic factors of stone masonry consisting mostly of conglomerate rock. during the execution of the various phases new elements have emerged that have helped to understand the construction phases of the monument itself. 
1.2 methodological approach of the project within the vast project of analysis of the condition, diagnosis, planning and organization of a rigorous monitoring, consistent with the "conceptual route" started long ago with the superintendency of cultural and environmental ___________________________________________________________________________________________________________ geoinformatics ctu fce 316 heritage of the region of valle d'aosta within the frame of other joint projects (see for example the relief of the city of aosta urbica, the finding of early medieval frescoes of the collegiate church of saint orso in aosta, the survey of the capitals in the cloister of the collegiata and the arch of augustus ) [3, 4, 5, 6, 7, 8, 9], the institute for technologies applied to cultural heritage (itabc) of cnr in rome has developed a rigorous 3d geometric documentation of the praetorian gate which is the basis on which then map all information on the conservation status of the monument. the ultimate aim of the project is to obtain, through the use of innovative techniques, an operational tool for the analysis of the building that, used by various technicians (archaeologists, architects, historians and restorers) in the survey phase, will became a valid support in the monitoring project management. this operational tool will configure then, first, as an instrument of research, for interactive questions on various topics (changes of degradation parameters for the component materials, colour, etc. .) so to extract necessary information to assess the state of preservation. it will also be a monitoring tool of significant parameters, so to be able to derive comparative assessments on used materials time behaviour, the kinetics of degradation and therefore the effectiveness of carried out interventions. with reference to the foregoing, the three main objectives are: the creation of an accurate survey , detailed from both the geometric and the radiometric information, and a documentation package that, through the use of innovative methodologies and technologies, ensures added value of research and testing; setting up an operational tool for the analysis of the monument which brings together all the heterogeneous data that would complement the various actors in the learning phase, being a valuable support in monitoring and conservation; the transfer of know-how to the same administrative authorities involved in the project by carrying out a phase of training on job designed to make them fully autonomous in the use of an operational tool also configured as a function of possible implementations and upgrades of documentation relating to the monument, with a view to its scheduled conservation. with reference to the overall objectives of the project, aimed at the optimization and the easy management of information, coming from different disciplines it was possible to identify a specific operational path approach involving several distinct and defined phases. these steps can be summarized as follows: acquisition of data: geometric survey; visual analysis (types of degradation, building materials, etc.). non-invasive diagnostic tests; historical-critical analysis and documentary research. integration and data management: development and optimization in a specific information system (arkis); identification of routes of investigation, interrogation and observation. 
of the different phases in which the project is articulated, this paper specifically wants to account the performed survey and the consistency of the acquired and processed data, and trace the lines of a future development. 2. the survey 2.1 the geometrical survey: general criteria the three-dimensional survey of the praetorian gate, built according to innovative paradigms, so in this sense is completely non-existent at present, has been calibrated (in terms of detail, rendering, scale, etc..) on the same type of artefact to be investigated and, in relation with the primary purposes that the project intent arises, the used techniques have provided some essential traits such as reliability, accuracy and measurability of the returned data [2, 3, 4]. it also has three-dimensional requirements: the possibility of a stereo metric control of the monument is indeed very timely, given the important purpose of documentation. the obtained three-dimensional models, made available to the administration, will be further elaborated in a later date, for the acquisition of a number of useful additional information so to increase knowledge for the conservation (relations between the different parts, alignments of the body wall, any misalignments, etc.) and then to plan additional surveys and analysis preliminary to the monitoring of the artefact and any simulations of restoration. finally, allows the production of a detailed documentation, not only in terms of geometry (quantity) but also in terms of radiometry (qualitative), and therefore able to provide information on colour and morphology of the material components, and their possible alteration due to the widespread phenomena of degradation. in this regard, and in regard of the experimental and scientific connotation that we wanted to give to the whole survey, certain criteria have been identified, on which to base the design and implementation of documentation and relief operations. they can be summarized as follows: use and testing of an innovative survey calibrated system, of triplets of images, which can produce accurate three-dimensional scans of the monument, through the production of point clouds with a complete space (xyz coordinates) and colorimetric (rgb values) information; ___________________________________________________________________________________________________________ geoinformatics ctu fce 317 preliminary data acquisition, programming and planning post processing phases in accordance with special needs and/or any priorities. this ensures the superintendent the possibility of creating a complete archive of heterogeneous basic documentation, permitting, at the same time, a different use of the funds available with obvious savings in time and cost; ability to use the captured data (coordinates of points, point cloud, range maps, triplets of high-definition images, monographs, etc..) according to different methods in relation to specific and temporary needs of a technical nature (time and / or costs reduced by implementing a detailed analysis of more in-depth qualitative assessments of the state of consistency, etc..) or administrative (such as the choices that the superintendent will decide whether or not to operate equipment on hw and sw). 
these are basically two: monoscopic processing by orthogonal mosaics arising from the triplet of photographic images in order to obtain documentation, including paper documentation on which to perform a first level of analysis, being 2d drawings where the altitude (z coordinate) is not considered and therefore, the information appears to be detectable only on the x to y; 3d modeling from which to obtain digital surface models or three-dimensional environments the can be explored, suitable for the management of all processed data and useful information for study and analysis. in both cases, this allows technicians to have a continuous information, characterized by the presence, on the exact "geometry " of the building, of all the qualitative data, with the possibility to add, to the precision of photogrammetric data, morphological qualitative data, related to colour and details of digital images and, therefore, to measure, calculate areas or create a decay thematic legend. 2.2 the geometric survey: techniques and methodologies in terms of techniques and methodologies, the survey has been carried out through the use and testing of an innovative calibrated system of triplets of images, photo 3d scanner, capable of accurate 3d scans of detected objects, with the integration of geometric and colorimetric data without the use of laser scanner [5]. the used technology (zscan, designed and produced by menci software arezzo) [6] allows to obtain point clouds with rgb information from which to develop 3d models at different levels of complexity and scale, starting from the treatment of a discrete number of digital images, acquired in a controlled mode, by using specific equipment and postprocessing the data within a specific software based on an innovative algorithm for image matching. this is a survey system based on the achievement of redundant digital images of the monument made in known conditions: for each portion of the object are captured three different images from three different angles, following a simple set of procedures and through the use of an acquisition system and a photographic camera properly calibrated. the processing of each triplet of images, within a specific software environment which allows the application of a sophisticated algorithm for image processing, based on the principles of stereo-photogrammetry, allows the transformation of the individual pixels of images in a cloud of points of known coordinates, together with color information in rgb format, without the aid of any topographic support. from these point clouds can also be immediately obtained individual range maps, or triangulated mesh and texture of that surveyed individual portions of the object surface. the acquisition system then consists of a hardware, that is a camera with fixed optics appropriately calibrated (to know the value of the mounted lens distortion), a calibrated chassis of 90 cm mounted on a tripod with a rotating head and 3d software based on an algorithm of image analysis which makes it extremely efficient and precise. the digital camera can slide on the precision steel bar, where some holes have been prepared at known distances, which represent the possible positions of the camera itself. 
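the transformation of matched pixels into points with xyz and rgb values rests on classical stereo-photogrammetric geometry: with the camera calibrated and the distance between its positions on the bar known, the parallax of a matched pixel fixes its depth. the snippet below is only a two-view, normal-case illustration of that principle with made-up numbers; the actual zscan processing works on calibrated triplets with its own matching algorithm, which is not reproduced here.

```python
import numpy as np

def pixels_to_points(disparity, rows, cols, rgb, f_px, baseline, cx, cy):
    """normal-case two-view reconstruction: depth z = f * b / parallax, then the
    lateral coordinates from the pinhole model. all inputs are 2d arrays of the
    same shape except the scalars f_px (focal length in pixels), baseline (metres)
    and cx, cy (principal point in pixels). returns an (n, 6) array of xyz + rgb."""
    z = f_px * baseline / disparity           # depth along the optical axis [m]
    x = (cols - cx) * z / f_px                # lateral offset [m]
    y = (rows - cy) * z / f_px                # vertical offset [m]
    xyz = np.dstack([x, y, z]).reshape(-1, 3)
    return np.hstack([xyz, rgb.reshape(-1, 3).astype(float)])

# made-up example: a 4 x 4 image patch with a constant parallax of 80 px,
# focal length ~4400 px and a 0.45 m base between two positions on the bar
rows, cols = np.mgrid[0:4, 0:4].astype(float)
cloud = pixels_to_points(np.full((4, 4), 80.0), rows, cols,
                         np.zeros((4, 4, 3)), f_px=4400.0, baseline=0.45, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 6): every pixel becomes a coloured 3d point
```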
it consist basically to catch three shots in succession (left, centre, right) from different locations and with a considerable overlap between the individual shots (at least 30%), with the foresight to define the interval between these positions in relation to distance of the object to be detected and the scale of the wanted detail. the survey system that offers, in conclusion, a number of benefits that can be summarized as follows: flexibility in the acquisition phase since, as been variable the distance between the sockets, it can be optimized depending on the size of the artefacts, the distance from the object and the actual conditions of recovery; speed in the acquisition phase with the ability to make cheaper topographic support (the pursuit of a high degree of accuracy should, however, be too stringent in that medium, using gcp (ground control points); speed during the processing of data acquired for the presence of an appropriate software environment, associated with the system, which allows you to build three-dimensional models, whether in the form of clouds of points of triangular mesh by simply inserting images, and all of the triplet parameters related to recovery; ability to chain, in a development phase, the individual models obtained resulting in a total of three -dimensional object models, measurable, exploring and analyzing navigation in special environments [7, 8]. 2.3 stages of survey and data processing in the specific case of the praetorian gate, a monument of particular complexity for its geometry and the global "urban" scale size, in order to be able to get both the entire roof of the building and the lower part, at present beneath the current street level, and to survey also the top and the intrados of the arches, as well as all the related artifacts, as the tower of lords, has been planned an intervention strategy that integrates different methods of acquisition. specifically, we have ___________________________________________________________________________________________________________ geoinformatics ctu fce 318 surveyed with a huge amount of "controlled" triplets of images from the ground, "free" digital images, with a compact digital camera, positioned on a telescopic pole and images from drones or uavs. the 3d photo scanner technology has, in fact, an extension application which allows a remote-controlled carrier (uav unmanned aerial vehicle) in order to perform aerial shots from different altitudes. using this technology you can scan entire areas of each site with a significant perspective advantage (photo zenith) and obtain extremely accurate plano altimetric reconstructions in terms of geometry and radiometry. placing then the digital camera with an angle of 90 degrees it has been possible to survey the front facade and above the upper parts of the monument otherwise completely inaccessible, ensuring also a more reliable measure of those too foreshortened areas. an accurate topographic survey, both in elevation and plane to collect also a large number of gcp on the monument, which are necessary for the registration of individual point clouds into a single spatial reference system, has completed the stages of data acquisition. the integration between the different generated clouds of points has produced a complex three-dimensional model, at different levels of information density. in all, relief operations have required three days in the field, of which only half for the survey with the uav system. 
the instrumentation used throughout the campaign has been: pentax total station for the topographic survey leica laser distance meters for direct measurement nikon d-200 (10.2 megapixels ccd, 24mm lens, calibrated in a laboratory certified uni) for the acquisition of point clouds with spatial information (xyz) and colorimetric (rgb) canon s90 digital camera (10.2 megapixel ccd, calibrated in a laboratory certified uni) radio-controlled uav esa-copter aluminum bar zscan calibrated system, mounted on a manfrotto tripod 10 m telescopic pole toshiba laptop for managing the acquisition and archiving of acquired data in real time. during the first post processing phase, from each triplet has been obtained a three-dimensional model, both in the form of point clouds with an rgb value and of a triangle textured mesh, of each portion of the gate. similarly it was possible to produce 3d models in the form of point clouds from all the digital images acquired either "free" or with the uav, by virtue of their high degree of overlap (no more than 50%). these models were then registered each to the other, using data derived from the topographic support, to obtain an overall three-dimensional model, measured, analyzed and explored in a special navigation environment. finally, from the three-dimensional model were obtained ortho-photos of the main facades which can be eventually vectorialized with cad. this documentation was essential during the implementation of the use of arkis to handle all the monument information. the phase of data processing has been developed through the use of two software that are part of the zscan system (zscan zmap and laser). the first software allows the extraction, from every single sequence of triplets of images, of a single points cloud that contains spatial information and colorimetric coordinates xyz and rgb. after checking the equalization of the color between each image, the three images, loaded the fundamental parameters (baseline and calibration files used in optics) are easily processed (figure 2). figure 2: computing environment zscan ___________________________________________________________________________________________________________ geoinformatics ctu fce 319 the procedure at this point, consists of four basic steps: correction of the images through the application of a correction and trinocular matching feature, which makes the software automatically eliminate geometric distortions of lenses; selection of the image area of interest (aoi) to be processed determination of the desired resolution, measured in pixels; production of the cloud of points, one for each triplet, generated automatically thanks to an innovative image processing algorithms. simultaneously, the same software can also automatically create a triangulated and textured surface by a process of the point cloud triangulation. in connection with the accuracy of the survey and of the wanted details, has been adopted as a resolution step a value of 3 pixels, which corresponds to 0.3 mm from point to point of the generated cloud. each single point cloud has been registered with the near ones, and overlap cleaned, so to obtain a unique 3d model (figure 3). the zmap software allows two different types of recording: first, a semi automatic, which is based on the mutual recognition and collimation of significant homologous points between two different points clouds, secondly, a full automatic alignment, with the use of an image matching icp algorithm. 
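the fully automatic alignment mentioned last is, at its core, an iterative closest point scheme: correspondences between overlapping clouds are found by a nearest-neighbour search and a rigid transformation is re-estimated until the residuals stop improving. the sketch below shows a plain geometric icp with numpy and scipy; it is a generic illustration, not the zmap implementation (which also exploits image information), and the file names in the usage comment are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_3d(src, dst):
    """least-squares rigid transform (rotation r, translation t) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))              # avoid a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(moving, fixed, max_iter=30, tol=1e-6):
    """minimal icp loop: nearest-neighbour correspondences from a k-d tree,
    a rigid svd step, repeat until the mean point-to-point distance settles.
    moving, fixed: (n, 3) and (m, 3) arrays covering the overlap of two clouds.
    returns the transformed moving cloud and the final mean residual."""
    tree = cKDTree(fixed)
    current, prev_err = np.asarray(moving, float).copy(), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(current)                  # closest fixed point per moving point
        r, t = best_rigid_3d(current, fixed[idx])
        current = current @ r.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return current, err

# usage sketch (hypothetical files): register the overlap of one range map to its neighbour
# moving = np.loadtxt("cloud_a.xyz")[:, :3]; fixed = np.loadtxt("cloud_b.xyz")[:, :3]
# aligned, residual = icp(moving, fixed)
```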
these procedures are used both for each model trying to compensate the propagation of errors throughout the process of orientation. the average accuracy obtained has been of 0.5 mm. ___________________________________________________________________________________________________________ geoinformatics ctu fce 320 figure 3: 3d point cloud models with rgb information 3. the integration of heterogeneous data 3.1 the integration and data management arkis as mentioned above, in order to monitor the monument over time, the priority objective of the whole operation is to reach a "critical use" of the acquired data, by optimizing the processes of integration and analysis in specific environments, providing thereby a valuable tool for decision support and the planning of the work of individual technicians. the data, therefore, whether geometric or descriptive of the condition, material, etc. have been structured in an information system designed to integrate complex, heterogeneous information related to various acquisitions at each stage of the cognitive approach to the manufacture (three-dimensional relief, visual analysis and / or instrumental). the system is represented by software called arkis (architecture recovery knowledge information system) developed in language avenue, environment in arcview (esri), the innovative aspect is the transfer of specific functions of gis (geographical information system), to the architectural scale. the system structure permits to directly import the geometric data (raster from ortophotos, cad vector, etc.) obtained during the survey or acquired through scanning other documentation that may exist in archives [9]. on this geometric basis, restorers, or others involved in the project, can draw or study the different themes, specially configured to meet the requirements of the investigation (mapping of changes in degradation, stratigraphy, past restoration etc.). to these themes, developed in close collaboration with technician in charge, are associated alphanumeric information previously collected by appropriate analysis (instrumental, "visible", documentary, etc.); the system arkis allows, in fact, being a gis, to interrelate, through a specially configured interface, the description given in the chart with the 3d surveyed area (figure 4). ________________________________________________________________________________ geoinformatics ctu fce 2011 321 figure 4: arkis – consultation phase data on the east side of the praetorian door the topics and related information is spatially and uniquely determined trough the topology tool, basis of gis technology. the use of the arkis system has been part of the training on job project done in close collaboration with operators of the administration involved, in order transfer the know-how which makes the administration of cultural heritage of aosta completely autonomous in the use of the system. the system arkis tool, once available on web, will allow restorers to access and view the data directly from the work site; the operators of the superintendent will also record all data in time for laboratory analysis and restoration, with a view to its conservation budget and schedule. 4. conclusions the work as far as here presented remarks how “traditional” ways can still be pursued and enforced as valid applications to survey cultural heritage. 
systems based on image capture and processing, for digitization and 3d model reconstruction can be widely applied to different scale archaeological artifacts, also allowing common users to quickly understand their configuration and peculiar characteristics [10]. digital photogrammetry has given satisfactory results in terms of surveyed number of points and precision in the location of the acquired surfaces, taking advantage of uav technology. moreover digital quick photogrammetry – photo-scanner achieving at the same time both the geometric and the color documentation (points clouds with rgb color information), photo-realistic 3d models could be easily outputted on as a high quality level as of those created from laser scanner‟s dataset trough a longer post-processing texturing work. morphological details or components materials and colorimetric definitions so could be extensively exploited in further analysis by specialists from various technical fields. finally is to underline the low cost of this technology which offers to peripheral museums or small public administration the opportunity to plan surveys and preservation of own artifacts otherwise not possible if more expensive technologies and methodologies, as laser scanner sensors, are involved, that question us on ethical responsibilities of the use of too expensive hardware in cultural heritage knowledge. 5. references [1] promis, c., 1ř62. le antichità di aosta, capo vii, § 1. porta pretoria nello stato presente, stamperia reale, torino, 1862, 142-156. [2] drap, p., sgrenzaroli, m., canciani, m., cannata, g., seinturier, j., 2003. laser scanning and close range photogrammetry: towards a single measuring tool dedicated to architecture and archaeology, in proceeding isprs symposium. [3] salonia, p., scolastico, s., bellucci, v., 2006. laser scanner, quick stereo-photogrammetric system, 3d modelling: new tools for the analysis and the documentation of cultural archaeological heritage, in proceedings of 2nd international conference on remote sensing in archaeology, rome. ________________________________________________________________________________ geoinformatics ctu fce 2011 322 [4] salonia, p., bellucci, v., scolastico, s., marcolongo, m., leti messina, t., 2007. 3d survey technologies for reconstruction, analysis and diagnosis in the conservation process of cultural heritage, in proceedings of cipa 2007 – xxi international symposium. anticipating the future of the cultural past, 1-6 october 2007, athens, greece. [5] salonia, p., leti messina, t., marcolongo, a., scolastico, s., 2009. three focal photogrammetry application for multi-scale and multi-level cultural heritage survey, documentation and 3d reconstruction, in proceedings of the 22nd cipa symposium 2009, kyoto. [6] menci software homepage. http://www.menci.com/ . [7] salonia, p., leti messina, t., marcolongo, a., scolastico, s., 2009. survey and 3d reconstruction of the st. orso capitals in aosta, through three-focal photogrammetry, in proceedings of the 10th international symposium on virtual reality, archaeology and cultural heritage vast 2009. [8] the cenobium project's homepage 2009. http://cenobium.isti.cnr.it/ index.php. [9] salonia, p., 2003. strumenti informatici innovativi di ausilio alla conservazione del patrimonio storico architettonico: problemi di organizzazione, diffusione e gestione dati, in rossi m., salonia p., a cura di, , comunicazione multimediale per i beni culturali, addison-wesley, milano 2003. [10] martinelli, m., 2006. 
passato e futuro del 3d archeologico, dalle foto stereoscopiche al computer per l‟architettura antica, arkos, 16, pp. 18-23. http://cenobium.isti.cnr.it/cenobium.php?site=aosta&lan=it utilization of beam and nest open source toolboxes in education and research markéta pot̊učková charles university in prague, faculty of science department of applied geoinformatics and cartography mpot natur.cuni.cz eva štefanová charles university in prague, faculty of science department of applied geoinformatics and cartography stefano1 natur.cuni.cz keywords: esa open source toolboxes, beam, nest, snow monitoring abstract european space agency (esa) provides several open source toolboxes for visualization, processing and analyzing satellite images acquired both in optical and microwave domains. basic ers & envisat (a)atsr and meris toolbox (beam) was originally developed for easier handling envisat optical data. today this toolbox supports several raster data formats and datasets collected with other eo instruments such as modis, avhrr, chris/proba. the next esa sar toolbox (nest) has been created for processing radar data acquired from different satellites such as ers 1&2, envisat, radarsat or terrasar x. both toolboxes are suitable for the education of the basic principles of data processing (geometric and radiometric corrections, classification, filtering of radar data) but also for research. possibilities for utilization of these toolboxes in remote sensing courses based on two examples of practical exercises are described. use of the nest toolbox is demonstrated on a research project dealing with snow cover detection from sar imagery. introduction european space agency (esa) has been focused on development of open source toolboxes for professional processing of remote sensing data for more than a decade. at the same time the agency pays attention to educational activities such as eduspace that offers students and teachers of secondary schools examples of earth observation (eo) data and simple software tools for image processing (leoworks, [1]). nowadays, 29 toolboxes are available from the esa homepage [2]. in the first place, they have been developed to help users to read, process and manage data of esa eo missions but recently some products of so called ”third party geinformatics fce ctu 2010 37 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research missions” can be processed with these software tools as well. from the point of view of teachers and researchers dealing with optical and microwave remote sensing, the following toolboxes may be found most interesting and useful (esa toolboxes [2]): � beam (basic ers & envisat (a)atsr and meris toolbox) is primarily suited for viewing and processing of envisat optical data. � nest (next esa sar toolbox) enables viewing and processing of sar data such as ers or envisat/asar but also new sensors as terrasar-x and radarsat-2. � polsarpro offers tools for manipulating sar polarimetric data. it also includes educational material about sar polarimetry. � enviview contains visualization tools for envisat data. � esov (earth observation swath and orbit visualization tool) gives an overview on instrument swaths of all esa earth observation satellites, e.g. where and when satellite data capture takes place and where and when ground contact is possible. 
in addition to the mentioned software tools, there are two others that are very useful for searching data from esa eo missions and some third party missions (both archive data and planned acquisition), namely eolisa [3] and descw [4]. while the former can also be used for ordering data, the later is suited for sar acquisitions and finding interferometric pairs with proper geometric and temporal parameters. the purpose of this paper is to show a possibility for using the beam and nest toolboxes for the education of remote sensing on the university level and in research. first, a short characteristic of both toolboxes is given. examples of two practical exercises dealing with spectral characteristics of vegetation and mapping snow cover area from envisat optical data (meris and aatsr) follow. use of the nest toolbox is demonstrated on a research project focused on snow cover monitoring from sar imagery. beam and nest toolboxes beam basic ers & envisat (a)atsr and meris toolbox (beam) was originally developed for viewing and processing of envisat optical data. based on an esa project, the software development has been carried out by the private company brockmann consult. since the start of the project in 2002, several versions have been released (v. 4.8 in july 2010). beam is programmed in java which enables its use on different platforms. there are versions for windows, linux, mac os and solaris operation systems. the users can develop and implement their own modules. the user community is rather wide; it has a lively user forum. beam consists of three main components [5]: � visat, a visualization, analyzing and processing desktop application comprising functions for – data import and export (formats envisat n1, ceos, hdf, geotiff, beamdimap), geinformatics fce ctu 2010 38 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research – displaying images including navigation window, layer manager and enhancement tools, – basic image analysis: histogram, scatter plots, band transect profile, definition of points of interest (so called pins), spectrum view in pin positions, geometric corrections – generation of image products: reprojection, orthorectification, connecting images to a mosaic, band arithmetic, band filtering, – advanced processing: cluster analysis, spectral unmixing and special functions suited for processing meris data (smile and smac correction, radiance to reflectance processor, ndvi processor, cloud probability processor). all these modules are accessible both from visat and from a command line, � convertor from the internal beam-dimap format to geotiff, hdf-5 or rgb images, � java api for development of new, user defined modules. based on our experience, the visat graphical interface is user friendly. it also has the ability to display gis data (shapefiles) and other datasets available via wms. basic image processing tools can be easily implemented into introductory remote sensing courses. the advanced tools are mostly used for research or in master degree courses in remote sensing and image processing. nest the purpose of the next esa sar toolbox (nest) is to provide functions for viewing, analyzing and processing synthetic aperture radar (sar) data from esa satellites (ers 1 & 2, envisat and future sentinel). it also supports data of other sar sensors such as radarsat 1 & 2, terrasar-x or alos palsar. 
the user interface of nest is based on beam visat but it comprises a set of functions for sar data processing that are also a part of the basic envisat sar toolbox (best). nest is being developed under an esa project by a canadian company array systems computing, inc. the latest version nest 4a-1.5 (october 2010) comprises among others following sar processing functions [6]: � radiometric calibration, � speckle filtering, � multilooking, � coregistration, � terrain correction, � transformation from slant range to ground range. the newest version also enables insar processing (coherence estimation, generating interferogram) and it is therefore an alternative to the doris software package (doris insar [7]) developed for the linux operating system. moreover, a set of functions for ocean applications have been implemented such as object and oil spill detection. geinformatics fce ctu 2010 39 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research esa toolboxes in rs courses at the department of applied geoinformatics and cartography of charles university in prague, beam and nest software packages have recently started to be utilized in the remote sensing courses to give students some alternatives to commercial (and costly) software packages. the beam toolbox is mostly used for processing of envisat/meris (full resolution level 1 [8]) and aatsr (level 1 [9]) images that are continually collected at the department via a receiving station established in cooperation with esa. the basic functions of the beam toolbox are explained during a practical exercise of the course application of geoinformatics that is compulsory for students on the master level specialized in cartography and geoinformatics. from the academic year 2010/2011, students will get more familiar with optical envisat data and the beam toolbox by solving two tasks in the scope of the course extraction of information from remote sensing data. both tasks are based on results of diploma theses recently defended at the department and they are at the same time a part of ongoing research activities. spectral signatures of selected vegetation species the main advantage of meris data in monitoring of vegetation is in their high spectral and temporal resolution. the area of the czech republic is covered each three days; the only problem is the cloud cover that can be rather dense in some months of the year. the goal of the described practical is to compare spectral curves of different kind of crops (e.g. rapeseed, maize, sugar beet, hops) and other types of vegetation (e.g. coniferous and deciduous forests) within the year and in connection to different geographical and climatic conditions (altitude, orientation, average temperatures). a data set including images with minimal cloud cover (less than 20%) from the vegetation season (monthly scenes from april to september 2009) and a map of land cover (including crops) derived from meris data by classification and verified using in situ data is available. as a data set comprises multitemporal data, one of the goals is to teach the students how to deal with a radiometric correction and how to verify its result. the geolocation is based on orbital parameters but it should be checked by the students as well (mutual shifts of images should not exceed 1 pixel). the original scenes are already trimmed to the area of the czech republic according to the previously produced land cover map. 
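the geolocation check mentioned above (mutual shifts between the multitemporal scenes should not exceed one pixel) can be automated with a simple phase-correlation test on a pair of co-located bands. a possible sketch in python/numpy is given below; the variable names are illustrative and the loading of the exported beam bands is not shown.

```python
import numpy as np

def mutual_shift(band_a, band_b):
    """estimate the integer (row, column) shift between two co-located scenes by
    phase correlation. band_a, band_b: 2d arrays of the same shape (e.g. the same
    meris band from two dates), with clouds / no-data already masked or filled.
    for a usable multitemporal pair both components should be 0 or +/- 1 pixel."""
    a = band_a - band_a.mean()
    b = band_b - band_b.mean()
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))            # cross-power spectrum
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real   # normalised correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p) if p <= s // 2 else int(p) - s          # wrap to signed shifts
                 for p, s in zip(peak, corr.shape))

# usage (hypothetical band arrays exported from beam):
# print(mutual_shift(scene_april_b10, scene_may_b10))           # e.g. (0, 1)
```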
ancillary data about altitude and slope orientation were calculated in gis software from srtm data. moreover, a map of climatic regions of the country will be used [10]. the exercise in beam consists of following steps: � radiometric corrections of meris data (spectral shift – smile correction, atmospheric correction – smac [11]), � creating spectral curves for selected crops and other vegetation according to the legend of the land cover map using beam functions ”pin manager” and ”spectrum view” at the selected positions that differ with respect to altitude, slope orientation and climatic region. figures 1 and 2 show one of possible outputs from the exercise. students will produce and compare reflectance curves of different types of vegetation, evaluate an influence of geographic and climatic conditions on spectral characteristics of vegetation and draw some conclusions and recommendations for classification of meris images for the purpose of crop or forest geinformatics fce ctu 2010 40 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research monitoring. the students will learn how to work with multitemporal data and they will get better understanding of spectral behavior of vegetation in relation to different natural conditions. figure 1: reflectance curves of different vegetation types derived from meris data on 21st april 2009 (source: [12]) figure 2: reflectance curves of rapeseed in different climatic regions derived from meris data on 3rd may 2009 (source: [12]) snow cover determination using meris and aatsr data reflectance of snow and ice is very high in visible (vis) and near infra red (nir) part of the spectrum. similar spectral behavior is typical for clouds. that is why it is not possible geinformatics fce ctu 2010 41 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research to distinguish between snow and cloud cover in vis and nir images. reflectance of snow decreases considerably in the short wave infrared (swir) part of the spectrum (intervals 1.55 µm to 1.75 µm and 2.1 µm to 2.3 µm) while the reflectance of clouds does not change considerably in comparison to shorter wavelengths [13]. the sensor meris acquires data in 15 spectral bands in the range from 390 nm to 1040 with approximately 300 m spatial resolution and it is not therefore suitable for snow detection in case of a partial cloud cover. the aatsr sensor has a lower spatial resolution (1 km) but it includes also a spectral band in the swir part of the spectrum (1.6 µm). the goal of the proposed exercise is to teach students how to combine these two sources of data and how to derive a snow product from them. the exercise contains following steps: � creating a sub-scenes of original data covering the area of the czech republic, � radiometric corrections of meris data (spectral shift – smile correction, atmospheric correction – smac [11]), � coregistration of meris and aatsr images (function ”collocation”); an image from the meris sensor is chosen as a ”master” and an aatsr image is resampled into it, � visualization of a coregistered product, � derivation of a snow and cloud mask based on thresholds, � evaluation of the created snow mask based on comparison with other snow products (e.g. based on modis data) or in situ measurements. visualization of a coregistered product practical examples show the advantage of the aatsr swir band for discrimination between clouds and snow as depicted in figure 3. 
beam visualization tools can be used for this purpose.

figure 3: example of a color composition of meris and aatsr data for visualization of snow and cloud cover (source: [14])

derivation of a snow and cloud mask based on thresholds

using band arithmetic, the following indexes can be calculated:

• cloud index [15]: ci = (aatsr 3.7 – aatsr 11)/(aatsr 3.7 + aatsr 11)
• meris normalized differential snow index [16]: mndsi = (mer 0.865 – mer 0.885)/(mer 0.865 + mer 0.885)
• normalized differential snow index [14]: ndsi = (mer 0.665 – aatsr 1.6)/(mer 0.665 + aatsr 1.6)
• normalized differential vegetation index: ndvi = (mer nir – mer r)/(mer nir + mer r)

the number behind the sensor abbreviation corresponds to the spectral band central wavelength in µm. several meris spectral bands can be used for the calculation of ndvi. in the algorithm presented below, bands 7 and 10 (0.665 µm and 0.753 µm) are used [14]. the goal of the exercise is to evaluate the suitability of the above listed indexes in combination with reflectance (r) and brightness temperature values for snow and cloud determination and to empirically derive thresholds for discrimination of both of these features. relevant literature, e.g. [15], is available. an example of derived masks and an applied classification algorithm is shown in figure 4.

figure 4: result of snow classification from meris and aatsr data on 9th january 2009. the classification is based on a combination of thresholds on the cloud index, aatsr reflectance values at the 1.6 µm spectral band, the meris normalized differential snow index and ndvi. (source: [14])

snow detection from envisat/asar data

the extent of the snow cover area (sca) is important for hydrological models, especially for flood prediction due to snow melt. discrimination of sca from optical and radar images is one of the tasks of the project "demonstration of esa environments in support to flood risk earth observation monitoring" (floreo). within this project, sca determination based on observations from both optical and radar sensors was tested. utilization of optical images is problematic due to cloud cover that is rather frequent in the winter period (e.g. the average cloud cover in february 2009 was 93%). our research therefore focused on the determination of sca from envisat asar images. with the exception of the final classification and visualization of the results, all processing steps were carried out in the nest software package. the influence of snow on radar backscattering in the c band has been known for many years (e.g. [17]). while the impact of volume scattering in dry snow is negligible and the dominating backscatter comes from the ground surface, the presence of liquid water in the snow pack causes a significant decrease of backscatter in comparison to snow free or dry snow conditions. thus, wet snow can be discriminated if an image acquired in the snow melt period is compared with an image collected before or after this period [18]. our investigations on the asar wide swath mode, single polarized product showed that it was best to choose images with dry snow or frozen, snow free earth as a reference. the image processing steps for wet snow discrimination are depicted in figure 5.
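returning to the optical snow and cloud mask above: the band arithmetic maps directly onto numpy once the collocated meris/aatsr bands are exported as arrays. the sketch below is only illustrative — the array names are assumptions, and the threshold values in the mask are placeholders, since deriving usable thresholds empirically is exactly the task given to the students.

```python
import numpy as np

def snow_cloud_indexes(aatsr_bt_37, aatsr_bt_11, aatsr_r_16,
                       mer_665, mer_753, mer_865, mer_885):
    """band arithmetic indexes from a coregistered meris/aatsr product.

    aatsr_bt_* are brightness temperatures (3.7 and 11 um), aatsr_r_16 is the
    1.6 um reflectance, mer_* are meris reflectances (band centre in nm);
    all inputs are 2-d arrays of identical shape.
    """
    ci    = (aatsr_bt_37 - aatsr_bt_11) / (aatsr_bt_37 + aatsr_bt_11)  # cloud index [15]
    mndsi = (mer_865 - mer_885) / (mer_865 + mer_885)                  # meris ndsi [16]
    ndsi  = (mer_665 - aatsr_r_16) / (mer_665 + aatsr_r_16)            # ndsi [14]
    ndvi  = (mer_753 - mer_665) / (mer_753 + mer_665)                  # ndvi from bands 10 and 7
    return {"ci": ci, "mndsi": mndsi, "ndsi": ndsi, "ndvi": ndvi}

# illustrative mask only -- the signs and values of the thresholds have to be
# derived empirically from the data, which is the point of the exercise
def snow_mask(idx, ci_max=-0.1, ndsi_min=0.4, ndvi_max=0.3):
    return (idx["ci"] < ci_max) & (idx["ndsi"] > ndsi_min) & (idx["ndvi"] < ndvi_max)
```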
with the exception of the last classification step, nest functions were utilized in the implementation of the proposed methodology.

figure 5: main processing steps of the classification of wet snow from a pair of sar images.

intensity values are recalculated to the backscatter coefficient σ0 in the image correction step according to the formula [6]

σ0 = (dn²i,j / k) · sin(αi,j),

where dn²i,j is the pixel intensity for pixel i, j, k is the absolute calibration constant, and αi,j is the incidence angle. in order to minimize the speckle effect, a gamma filter using a window of 7×7 pixels was applied to the data. in the following step, a nest function enabling automated coregistration of images was applied. at the beginning of the floreo project (2008), this function was not available. coregistration was therefore performed based on manual measurement of control points (mostly water bodies) followed by a polynomial transformation in the pci software. the implementation of the coregistration function in nest created the possibility of a complete automation of all processing steps. the terrain correction function creates an orthorectified image in wgs 84 geographic coordinates. it requires orbital parameters and a dem as input values. the orbital parameters are included in the original data file. nest contains a link to the srtm dem. a difference image is created using the nest band arithmetic function. differences in backscatter are calculated as ∆σ0 = σ0snow − σ0ref. the final classification is based on setting a threshold for the ∆σ0 values. in the literature, a threshold of −3 db is recommended [18], [19]. our investigations showed that this threshold is reliable in mountainous areas with sparse vegetation. for the purpose of sca mapping over the whole country, the threshold value was derived for each pair of snow and reference images based on ∆σ0 in the surroundings of meteorological stations where snow height and air temperature are measured, or based on snow maps derived from optical images [20]. although this solution brought some improvements, in the heterogeneous landscape of the czech republic the necessity of finding threshold values according to different land cover classes is obvious. after setting a threshold for ∆σ0, a probability snow map (see figure 6) is created using the sigmoid function f(x) = 50 − 50·tanh[a(x + 3)]. the slope a and the point of 50% snow probability are derived from a histogram of differences between a reference image and an image with snow.

figure 6: result of wet snow mapping from an asar image on 24th march 2009 (black 0% snow, white 100%). the red dots show positions of meteorological stations. urban areas are not masked out and appear in bright tones.

quality assessment of the snow maps from asar data was performed by a comparison with snow maps derived from modis images with a resolution of 250 m. the total accuracy of the classification was 92% in the image from 24th march 2009 (figure 7). the values in the contingency tables in this and the other two tested datasets showed that the classification result from asar images underestimated the snow cover area. this result is probably influenced by the fact that all snow pixels from optical data were taken into the calculation regardless of the wetness of the snow. post-processing of the radar data offers a possible solution to this problem. the presented classification step of the processing was done in matlab, the visualization of images in arcgis.
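the calibration, differencing and probability-mapping steps just described can be prototyped outside nest and matlab in a few lines. the numpy sketch below is an illustration, not the authors' implementation; the slope value a is a placeholder, since in practice both parameters of the sigmoid are fitted to the histogram of backscatter differences as noted above.

```python
import numpy as np

def calibrate_sigma0_db(dn, k, incidence_deg):
    """backscatter coefficient sigma0 [db]: sigma0 = dn^2 / k * sin(alpha), converted to db."""
    sigma0 = dn.astype(float) ** 2 / k * np.sin(np.radians(incidence_deg))
    return 10.0 * np.log10(sigma0)

def wet_snow_probability(sigma0_snow_db, sigma0_ref_db, a=2.0, x0=-3.0):
    """probability of wet snow [%] from the backscatter difference.

    implements f(x) = 50 - 50*tanh(a*(x - x0)); with x0 = -3 db the probability
    is exactly 50 % at the recommended -3 db threshold. a (slope) and x0 are
    placeholders here and would normally be derived from the difference histogram.
    """
    dsigma = sigma0_snow_db - sigma0_ref_db            # delta sigma0 [db]
    return 50.0 - 50.0 * np.tanh(a * (dsigma - x0))

# a simple crisp classification at the -3 db threshold, for comparison
def wet_snow_mask(sigma0_snow_db, sigma0_ref_db, threshold_db=-3.0):
    return (sigma0_snow_db - sigma0_ref_db) < threshold_db
```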
as it was mentioned before, the whole previous processing was carried out in nest. the first three processes were chained into one function (using the nest ”graph builder”) that can run in a batch mode. terrain correction was not added to this function due to problems with the size of original scenes. images have to be cut to the area of interest first. this step is done manually so far but its automation is in progress. geinformatics fce ctu 2010 45 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research figure 7: overlay of classification result of asar data from figure 6 and a snow (blue) and cloud (yellow) mask derived from modis image. conclusion the described examples demonstrated only three possibilities of the utilization of the beam and nest open source esa toolboxes in research projects and in remote sensing courses. the availability of software solutions as well as an opportunity to develop new modules and share ideas and problems in user forums gives a good base for further development of the presented tools. the main advantage and at the same time disadvantage of the software packages is the java platform. it makes it easy to create and implement new modules and it is suitable for educational purposes (not only in remote sensing applications but also for software development). on the other hand processing time is rather long in case of larger data sets which is not very convenient especially in case of operationally oriented projects. nevertheless, this ”weakness” is minor especially in educational applications and experimental work when smaller datasets are usually sufficient. in combination with an easy accessibility of esa eo data, the mentioned image processing tools create a valuable and solid base for research and education in remote sensing. acknowledgement the presented work was supported by the esa pecs project ”demonstration of esa environments in support to flood risk earth observation monitoring”(floreo). the envisat/asar images were obtained within the esa cat-1 project ”use of asar data for snow cover and soil moisture monitoring” (c1p.6052). geinformatics fce ctu 2010 46 pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research references 1. leoworks http://www.eduspace.esa.int/eduspace/leoworks/leoworks.asp 2. esa toolboxes http://envisat.esa.int/earth/www/category/index.cfm?fcategoryid=36 3. eolisa http://earth.esa.int/eoli/eoli.html 4. descw http://envisat.esa.int/earth/www/object/index.cfm?fobjectid=3824 5. beam http://www.brockmann-consult.de/cms/web/beam/ 6. nest http://liferay.array.ca:8080/web/nest 7. doris insar http://enterprise.lr.tudelft.nl/doris/ 8. meris data products and algorithms http://envisat.esa.int/handbooks/meris/cntr2.htm#eph.meris.prductslgr 9. aatsr products and algorithms http://envisat.esa.int/handbooks/aatsr/cntr2.htm#eph.aatsr.prodalg 10. moravec, d., votýpka, j. (1998): klimatická regionalizace české republiky. karolinum, praha, 87 p. 11. smac1 12. malikova, l. (2010): the application of high temporal satellite image data for designation of the spectral characteristic of vegetation, m.sc. thesis, charles university in prague, faculty of science, manuscript, 77 p. 13. riggs, g., hall, d.k. (2004): snow mapping with the modis aqua instrument, 61 eastern snow conference, portland, maine http://modis-snow-ice.gsfc.nasa.gov/pap esc04gar.pdf 14. zelenkova, k. (2009): determination of snow cover using remote sensing data, m.sc. 
thesis, charles university in prague, faculty of science, manuscript, 83 p. 15. tampellini, m. l. et al. (2003): monitoring snow cover in alpine regions through the integration of meris and aatsr envisat satellite observations2, meris user workshop 2003, esa-esrin, frascati, italy 16. schlundt, c. et al. (2010): synergetic cloud fraction determination for sciamachy using meris3, atmospheric measurement techniques discussion, vol. 3, pp. 3601–3642 1 http://www.brockmann-consult.de/beam/doc/help/smac/smacalgorithmspecification.html 2 http://envisat.esa.int/workshops/meris03/participants/219/paper 38 tampellini.pdf 3 http://www.atmos-meas-tech-discuss.net/3/3601/2010/amtd-3-3601-2010-print.pdf geinformatics fce ctu 2010 47 http://www.eduspace.esa.int/eduspace/leoworks/leoworks.asp http://envisat.esa.int/earth/www/category/index.cfm?fcategoryid=36 http://earth.esa.int/eoli/eoli.html http://envisat.esa.int/earth/www/object/index.cfm?fobjectid=3824 http://www.brockmann-consult.de/cms/web/beam/ http://liferay.array.ca:8080/web/nest http://enterprise.lr.tudelft.nl/doris/ http://envisat.esa.int/handbooks/meris/cntr2.htm#eph.meris.prductslgr http://envisat.esa.int/handbooks/aatsr/cntr2.htm#eph.aatsr.prodalg http://www.brockmann-consult.de/beam/doc/help/smac/smacalgorithmspecification.html http://modis-snow-ice.gsfc.nasa.gov/pap_esc04gar.pdf http://envisat.esa.int/workshops/meris03/participants/219/paper_38_tampellini.pdf http://envisat.esa.int/workshops/meris03/participants/219/paper_38_tampellini.pdf http://www.atmos-meas-tech-discuss.net/3/3601/2010/amtd-3-3601-2010-print.pdf http://www.atmos-meas-tech-discuss.net/3/3601/2010/amtd-3-3601-2010-print.pdf pot̊učková m., štefanová e.: utilization of beam and nest open source toolboxes in education and research 17. rott, h., künzi, k.f. (1983): remote sensing of snow cover with passive and active microwave sensors, hydrological applications of remote sensing and remote data transmission, iahs publ. no. 145 http://iahs.info/redbooks/a145/iahs 145 0361.pdf 18. nagler, t., rott, h. (2000): retrieval of wet snow by means of multitemporal sar data4, ieee transactions on geoscience and remote sensing, vol. 38, pp. 754 – 765 19. storvold, r., malnes, e., (2004): near realtime snow covered area mapping with envisat asar wideswath in norwegian mountaineous areas5, envisat and ers symposium, salzburg 20. potuckova m., jedlicka, j. (2010): snow determination from envisat asar: a case study from the czech republic, esa living planet symposium, bergen 4 http://esamultimedia.esa.int/conferences/98c07/review open/papers/99mb19.pdf 5 http://www.norinnova.no/content/download/1448156/3013866/file/igarss 2004 sca rune.pdf geinformatics fce ctu 2010 48 http://iahs.info/redbooks/a145/iahs_145_0361.pdf http://esamultimedia.esa.int/conferences/98c07/review_open/papers/99mb19.pdf http://esamultimedia.esa.int/conferences/98c07/review_open/papers/99mb19.pdf http://www.norinnova.no/content/download/1448156/3013866/file/igarss_2004_sca_rune.pdf http://www.norinnova.no/content/download/1448156/3013866/file/igarss_2004_sca_rune.pdf comprehensive approach for building outline extraction from lidar data with accent to a sparse laser scanning point cloud comprehensive approach for building outline extraction from lidar data with accent to a sparse laser scanning point cloud petr hofman and markéta potůčková∗ department of applied geoinformatics and cartography, faculty of science, charles university, czech republic ∗corresponding author: marketa.potuckova@natur.cuni.cz abstract. 
the method of building outline extraction based on segmentation of airborne laser scanning data is proposed and tested on a dataset comprising 1,400 buildings typical for residential and industrial urban areas. the algorithm starts with setting a special threshold to separate roof points from bare earth points and low objects. next, local planes are fitted to each point using ransac and further refined by least squares adjustment. a normal vector is assigned to each point. similarities among normal vectors are evaluated in order to assemble planar or curved roof segments. finally, building outlines are formed from detected segments using the α-shapes algorithm and further regularized. the extracted outlines were compared with reference polygons manually derived from the processed laser scanning point cloud and orthoimages. area-based evaluation of accuracy of the proposed method revealed completeness and correctness of 87 % and 97 %, respectively, for the test dataset. the influence of parameters like number of points per roof segment, complexity of the roof structure, roof type, and overlap with vegetation on accuracy is evaluated and discussed. the emphasis is on point clouds with the density of 1 or 2 points/m2. keywords: airborne laser scanning, building outline, low point density. 1. introduction increasing demand on 3d information and variety of its applications ranging from architecture, engineering, real estate, telecommunication or tourism and development in technologies like airborne laser scanning (als) and multi image processing have triggered research on algorithms for derivation of 3d building models from point clouds and on related issues such as accuracy assessment, transferability of the algorithms to different datasets, fusion of lidar and image data. a comprehensive review of about 100 papers dealing with building extraction from als data published in the last two decades, challenges and possible research trends can be found in [17]. point density belongs to the issues discussed. increasing point density increases also the accuracy of extracted building outlines [17]. the extraction algorithm mentioned in [16] requires conversion of an original point cloud to a raster that is further processed in the ecognition software (object based image analysis, [18]); densities lower than 5 points/m2 show very low accuracy in the presented case. on the other hand, sparser lidar point clouds are acquired for nationwide mapping and therefore their use for building extraction and modelling has been investigated (e.g. [14, 4, 2]). this article focuses on automated process of building outline extraction applicable on a sparse lidar point cloud with the point density about 1–2 points/m2. inputs include an original irregular point cloud containing only 3d coordinates of collected points plus digital terrain model (dtm) of the entire area. the algorithm is based on segmentation in the parameter geoinformatics fce ctu 16(1), 2017, doi:10.14311/gi.16.1.6 91 http://orcid.org/0000-0002-8760-790x https://doi.org/10.14311/gi.16.1.6 http://creativecommons.org/licenses/by/4.0/ p. hofman and m. potůčková: building outline extraction from lidar data domain (the categories of approaches for the segmentation of surface features can be found e.g. in [9]). to derive building roof surfaces, similar approach as shown in [6] is used; however, random sample consensus (ransac) algorithm [3] and similarity between point attributes (normal vectors) in local neighbourhood are utilized. 
in such a way, the proposed algorithm is applicable to lower point densities and non-planar surfaces. the influence of parameters like number of points per roof segment, complexity of the roof structure, roof type, and overlap with vegetation on accuracy is evaluated and discussed. development of the algorithm was initiated by the czech office for surveying, mapping, and cadastre. 2. data and test sites the laser scanning point cloud was acquired using the lms q680 scanner of the riegl laser measurement systems gmbh [12]. the overlap of the scanned lines was 50 %. the average point cloud density was 1.5 points/m2 and the estimated accuracy in the point elevation was 0.1 m. in order to cover the most common roof types in the czech republic, two different urban areas, municipalities of ctiněves (50°22’29” n, 14°18’26” e) and pardubice-polabiny (50°03’05” n, 15°45’40” e), were chosen (figure 1). ctiněves is a small village in the ústí nad labem region featuring typical rural architecture. smaller buildings with mostly gabled roofs are often partially overshadowed by vegetation. polabiny form part of regional capital pardubice. there are large blocks of flats and industrial buildings with flat roofs as well as residential areas comprising detached houses with more complicated roof constructions. in total, 1400 buildings and building complexes were processed. (a) (b) figure 1: subsets of point clouds from the test sites (a) ctiněves and (b) polabiny. for the purpose of visualization the point clouds were automatically classified in scop++ (dark green – ground, light green – vegetation, red – roofs, white – not classified). 3. methodology the proposed solution of building outline extraction is based on processing of irregular point cloud representing roof structures. the extracted outlines therefore correspond to roof outlines and not to groundplans. moreover, it is assumed that roofs consist of planar (flat, gabled, hipped roofs) or curved (spherical, cone roof) segments and that any roof can be assembled geoinformatics fce ctu 16(1), 2017 92 p. hofman and m. potůčková: building outline extraction from lidar data from such segments. a data-driven approach is applied; no pre-defined building models are used. figure 2 shows a general workflow of a developed methodology. first, points on bare earth and points with low height are filtered out. second, a plane fitting to its neighbourhood and a corresponding normal vector are assigned to each point on artificial and natural objects. based on similarity of normal vectors, points are divided to segments. segments boundaries are geometrically linked together to form a roof outline. finally, the outlines obtained are regularised. a detailed description of each processing step follows. the proposed methodology and its implementation require setting of several parameters. their values are in the current implementation set automatically depending on the point cloud density and type of urban area. the values were determined empirically and tested so that the algorithm was transferable among datasets with different densities and types of buildings. figure 2: general workflow of the proposed building (roof) outline extraction algorithm. 3.1. pre-processing prior to the roof outline extraction, a dataset representing only bare earth points (dtm) is required as an input. existing filtering algorithms, e.g. [7, 1], are nowadays used operationally (e.g. software packages terrascan [15], scop++ [20], lastools [11]) and will not be discussed further in this text. 
the program scop++ [20] was applied for filtering of the test data. any dtm derived by other means could be utilised provided its resolution and accuracy will not worsen the accuracy of derived roof outlines. 3.2. object detection building extraction from an original irregular point cloud can be a demanding computational operation. thus, reducing the dataset to building candidates is helpful. with the use of dtm, the height (above ground) of each point is calculated. points with height less than 2 m are excluded from further calculations. remaining point clusters are delineated with closed, in general non-convex polygons by means of the α-shape algorithm [8]. the next processing step aims at eliminating clusters that do not represent buildings but high vegetation, pylons, cars or their combination. geoinformatics fce ctu 16(1), 2017 93 p. hofman and m. potůčková: building outline extraction from lidar data 3.3. plane fitting points belonging to one plane roof segment show the same or very similar direction of normal vectors. in the case of curved surfaces, e.g. conical or cylindrical ones, the direction of normal vectors continuously change. based on these conditions, points not corresponding to buildings can be filtered out. on the other hand, the accuracy of determination of plane parameters and its vectors is essential for successful extraction of buildings. in our approach, a plane is fitted to surrounding of each point. to decrease the computational time, surrounding is defined with a distance threshold that depends on an average point density in the selected area. in the case of the tested dataset, a bounding box with a side of 5 metres was used. such an approach brings 30 to 40 points at the start of plane fitting. the chosen size of the bounding box was sufficient for a reliable definition of a plane even at the corners of the roof segments where in average 5–10 points laying at the same plane were found. first, plane parameters are calculated using ransac [3] and subsequently refined by means of least squares adjustment. fitting a plane to points lying in the middle of a planar roof segment without additional objects (e. g. chimneys, dormers) is trivial. nevertheless, there are often other roof constructions or overlapping vegetation or a point falls on the edge of two or more surfaces. thus, in order to find a plane fitting to most of the points in a given surrounding and to exclude outliers, ransac is applied. in addition to the evaluated point, two random points are iteratively selected, plane parameters are calculated, and distances of all points from this plane are evaluated. assuming that at least one fourth of points in the evaluated point surrounding falls into a searched plane, the number of iterations is determined so that the probability of selecting three initial points from this plane is not lower than 99.9 %. in the case of tested dataset, the number of iteration was set to 110 according to the formula 1 [3]. the used parameter values are in parentheses. k = log (1 − p) log (1 − wn) (1) k number of iterations (according to the parameters below equal to 107; set to 110) p expected probability of fitting a correct plane (99.9 %) w minimal expected number of inliers in the evaluated sample of points (0.25) n number of sought points (2, the third point was the evaluated one) only points with a distance smaller than a set threshold (in our case 0.1 m, i. e. the accuracy in height of the dataset) are accepted. 
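formula (1) and the local plane fitting procedure can be condensed into a short numpy sketch. this is only an illustration under the stated parameter values (5 m bounding box, 0.1 m distance tolerance, 110 iterations), not the authors' implementation; the least-squares refinement is realised here as an orthogonal fit via svd, which is one possible reading of "least squares adjustment".

```python
import numpy as np

def ransac_iterations(p=0.999, w=0.25, n=2):
    """number of iterations according to formula (1); ~107 for these values (set to 110 in the paper)."""
    return np.log(1.0 - p) / np.log(1.0 - w ** n)

def fit_local_plane(points, centre_idx, box=5.0, dist_tol=0.1, k=110, rng=None):
    """fit a plane to the neighbourhood of one point: ransac followed by a least-squares refinement.

    points     -- (n, 3) array of x, y, z coordinates
    centre_idx -- index of the evaluated point
    returns (unit normal, d, inlier mask over the neighbourhood) for n.x + d = 0, or None.
    """
    rng = rng if rng is not None else np.random.default_rng()
    c = points[centre_idx]
    near = np.all(np.abs(points[:, :2] - c[:2]) <= box / 2.0, axis=1)   # ~5 m bounding box
    nbr = points[near]
    if len(nbr) < 5:
        return None                                    # too few points for a reliable plane

    best_inliers, best_score = None, -1
    for _ in range(k):
        # the evaluated point plus two random neighbours define a candidate plane
        p1, p2 = nbr[rng.choice(len(nbr), size=2, replace=False)]
        normal = np.cross(p1 - c, p2 - c)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                   # degenerate (collinear) sample
        normal /= norm
        inliers = np.abs((nbr - c) @ normal) < dist_tol   # 0.1 m perpendicular distance
        if inliers.sum() > best_score:
            best_score, best_inliers = inliers.sum(), inliers

    # refinement: orthogonal least-squares plane through the inliers (via svd)
    pts = nbr[best_inliers]
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid)[2][-1]      # direction of smallest variance
    if normal[2] < 0:
        normal = -normal                               # orient normals upwards for comparability
    return normal, -normal @ centroid, best_inliers

print(round(ransac_iterations()))                      # -> 107
```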
the solution which features the highest number of points (the highest score) falling into the distance threshold is taken as the final one. plane parameters and the normal vector assigned to the evaluated point are recalculated by the least squares adjustment using all points fulfilling the threshold condition. the plain fitting process is shown in figure 3. 3.4. roof face segmentation points that match the criteria mentioned above can be further segmented and grouped into planar roof faces based on similarity in the direction of their normal vectors – points belonging to one plane are attributed with nearly parallel normal vectors corresponding to local geoinformatics fce ctu 16(1), 2017 94 p. hofman and m. potůčková: building outline extraction from lidar data (a) (b) (c) (d) figure 3: local plane fitting and refinement: (a) 2d section of an original point cloud with an evaluated point in green, (b) selected point surroundings, (c) fitting plane after using the ransac algorithm with a distance threshold, (d) final plane refined by means of least squares adjustment. planes formed in the point neighbourhood (figure 4). due to the presence of curved surfaces with continuously changing slope and exposition, the search for similar normal vectors is not performed at once for the whole building but only in the close proximity of the point. thus, also points with even opposite directions of the normal vectors can form one segment provided there is not any discontinuity, i.e. a difference in the angle between the normal vectors exceeding the given threshold. such local determination in similarity of normal vectors also allows for discriminating roof planes having the same slope and orientation but being physically separated. segments that include a low number of points to form a plane in a reliable way (less than 5 in our case) are excluded from the further processing. thus, after this step, only points forming roof surfaces remain. (a) (b) (c) figure 4: point segmentation based on similarity of normal vectors in close proximity: (a) 2d section of an original point cloud and direction of normal vectors assigned to individual points, (b) clustering based on parallelism of normal vectors – points belonging to the left (red) and right (blue) planes, and two points (black) which direction of normal vectors exceed the threshold with respect to their neighbouring points, (c) result of segmentation – points belonging to two roof faces as an input for the final building outline extraction. 3.5. outline extraction and regularization the method of building outline extraction published in [8] is used in the next step. first, the building outlines are derived by means of the α-shapes algorithm that enables extracting also non-convex shapes and holes inside polygons. next, irregular shapes are simplified using the adopted sleeve-fitting algorithm which preserves critical points. finally, the outlines are modified to the most common rectangular building shapes. the dominant building direction geoinformatics fce ctu 16(1), 2017 95 p. hofman and m. potůčková: building outline extraction from lidar data is calculated from the existing outlines. if the orientation of a single line does not significantly differ from the dominant direction (or the direction perpendicular to that), the line is transformed to that required direction (see also [8] or [21]). figure 5 shows an example of the outline extraction and regularization result. (a) (b) (c) (b) figure 5: outline extraction and regularization. 
(a) outline of a reference building (red polygon) derived by manual editing. (b) outline of a cluster after applying 2 m height threshold (green polygon). (c) outline after the roof segmentation (blue polygon). (d) final regularized building outline (cyan polygon). 3.6. evaluation approach the automatically extracted building outlines obtained by the above described approach were compared with outlines derived manually from colour orthoimages (0.25 cm ground sampled distance) and a surface model calculated from the test point cloud. thus, the stated accuracy and reliability values of outline extraction express only quality of the applied method and do not reflect absolute errors in the data. in order to evaluate the quality of outline extraction, an automated area-based approach is applied. its advantages in comparison with other evaluation methods are discussed in detail in [10]. the outline of each reference building is overlaid with extracted building outlines. overlaying areas are divided into three groups (see also figure 6): • true positive (tp): areas of the reference building that are correctly detected by the automated process. • false negative (fn): areas of the reference building that were not detected by the automated process. geoinformatics fce ctu 16(1), 2017 96 p. hofman and m. potůčková: building outline extraction from lidar data • false positives (fp): areas detected by the automated process that do not match any reference building. (a) (b) (c) (b) figure 6: definition of relation between (a) reference building (red polygon) and detected building area (cyan), (b) the true positive (tp) area (blue), (c) the false positive (fp) area (green) and (d) the false negative (fn) area (magenta) (after [10]). based on these areal values, two quality measures are computed: • completeness: comp = tp/(tp + fn), i.e. ratio of correctly detected building area to the area of the reference building • correctness: corr = tp/(tp + fp), i.e. ratio of correctly detected building area to the total detected building area in addition to the area-based method, the object-based evaluation with a mutual overlap threshold of 70 % and weighting by building area [13] is also applied. in addition to the overall quality, also influence of different parameters of an input point cloud, buildings, and surrounding conditions were evaluated, namely number of points per geoinformatics fce ctu 16(1), 2017 97 p. hofman and m. potůčková: building outline extraction from lidar data roof segment, level of noise in the data, building size, type and complexity of the roof structure, and presence of vegetation. only the area-based method was used for this evaluation. 4. results and discussion 4.1. overall quality the quality of building outline extraction achieved by application of the algorithm described on 1,400 buildings or building blocks is summarised in table 1. the object-based approach only shows whether the building was roughly detected; compared to the area-based evaluation, however, it does not express the geometric similarity between the reference and extracted building outlines. while the object-based completeness shows that 94 % of reference buildings were successfully detected and 99 % of all extracted building outlines match the reference buildings, the area-based quality measures express that 87 % of building area were extracted correctly and only 3 % of building area do not overlay reference buildings. 
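the two area-based measures defined in section 3.6 can be computed directly from polygon overlays. a minimal sketch with shapely follows, assuming the reference and extracted outlines are available as polygon geometries; the toy polygons at the end are made up for illustration.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def area_based_quality(reference_polys, extracted_polys):
    """area-based completeness and correctness as defined in section 3.6."""
    ref = unary_union(list(reference_polys))
    ext = unary_union(list(extracted_polys))
    tp = ref.intersection(ext).area   # correctly detected building area
    fn = ref.difference(ext).area     # building area missed by the extraction
    fp = ext.difference(ref).area     # detected area outside any reference building
    return tp / (tp + fn), tp / (tp + fp)   # completeness, correctness

# toy example: a 10 m x 10 m reference building detected with a 1 m horizontal offset
ref = [Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])]
ext = [Polygon([(1, 0), (11, 0), (11, 10), (1, 10)])]
print(area_based_quality(ref, ext))   # -> (0.9, 0.9)
```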
table 1: correctness and completeness of area- and object-based evaluation methods for building extraction. results of the above mentioned building outline extraction algorithm applied to 1,400 buildings in the ctiněves and polabiny test areas.

evaluation method    completeness    correctness
object-based         0.94            0.99
area-based           0.87            0.97

considering that only the spatial coordinates of the laser points were available (without additional information, e.g. multiple echoes or intensity) and the point density was rather low, the building outline extraction of the tested dataset was successful.

4.2. quality in relation to building and point cloud parameters

the success rate of the extraction chiefly depends on the building size, specifically on the size of the roof faces. figure 7a demonstrates the relation between the area-based correctness and completeness values and the average number of points per roof segment. similarly, figure 7b shows the relation between the same quality parameters and the size of the reference building. first, the relation between completeness and the observed parameters will be discussed. correctness will be analysed separately at the end of this section. it is obvious that completeness strongly depends on the number of points per roof segment. this number is influenced by the density of the original point cloud, the building size, and the complexity of the roof deck. building size works similarly; it is practically a subset of the first parameter studied. the trends observed are not surprising due to the fact that the proposed approach is data-driven and the detection of single roof segments is crucial in the whole extraction process. in the case of small buildings, the reliability of an automated decision (whether a point cluster creates a smooth and continuous surface) is very low. with an increasing number of points per roof segment the success rate increases rapidly. starting from 30 points per roof segment, the building outline extraction can be considered satisfactory; completeness exceeds 65 %.

figure 7: dependency of correctness and completeness accuracy measures (a) on the number of points per roof segment and (b) on the size of the building (y-axes: success rate in %; x-axes: points per roof segment and building area in m²).

the quality of building extraction is highly dependent on the number of points per roof segment and on the building size, respectively. thus, further parameters were studied on buildings larger than 100 m2. noise in the dataset can be described as σ0 (standard deviation of the unit weight) resulting from least squares estimation of a best fitting tilted plane in a point neighbourhood. the software package opals was used for calculating the σ0 values [19].

figure 8: dependency of correctness and completeness accuracy measures (a) on the noise level and (b) on overlapping vegetation (y-axes: success rate in %; x-axes: σ0 noise level and vegetation class – none, adjacent, overlapping).
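the noise measure used here — the standard deviation of unit weight of a best-fitting tilted plane in a point neighbourhood — can also be reproduced without opals. a short sketch follows; it assumes vertical residuals of a simple least-squares plane fit, whereas the estimator actually used in opals may measure residuals orthogonally to the plane.

```python
import numpy as np

def plane_fit_sigma0(points):
    """standard deviation of unit weight of a least-squares plane fit z = a*x + b*y + c.

    points -- (n, 3) array with the laser points of one neighbourhood (n > 3)
    """
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(xy1, points[:, 2], rcond=None)
    residuals = points[:, 2] - xy1 @ coeffs
    return np.sqrt((residuals ** 2).sum() / (len(points) - 3))   # sqrt(v'v / (n - u)), u = 3
```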
as expected, the dependence of completeness on σ0 is relatively strong (see figure 8a). if a normal vector was assigned to a point with an incorrect height, it would produce larger deviations from the vectors in its surroundings. on the other hand, increasing the threshold in the roof segmentation step would raise the number of false positives and it would decrease the correctness value. figure 8b documents that the results are not strongly influenced by adjacent or overlapping vegetation due to the applied filtering approach using ransac that effectively filters out a high portion of outliers. figure 9a shows the dependency of completeness on the complexity of the roof expressed as the number of roof planes/surfaces. no trend was observed in this case, which corresponds with the principle of the proposed algorithm. the local, data-driven approach does not consider any building as one unit in the detection phase and it is not limited to a pre-defined set of models, as is the case with the model-driven approach. any roof is considered as a union of an arbitrary number of either planar or curved surfaces.

figure 9: dependency of correctness and completeness accuracy measures (a) on building complexity expressed as a number of roof segments and (b) on roof type (y-axes: success rate in %; x-axes: number of roof segments and roof type – flat, inclined, gable, hipped, sphere).

the type of the roof does not influence the extraction success rate either (compare figure 9b). the proposed method can detect traditional gable or (half-)hipped roofs as well as modern flat and shed roofs. due to the local determination of similarity between normal vectors, curved surfaces are detected with the same quality. no strong relation was observed between the correctness values and the studied parameters. excluding negligible inaccuracies at the edges of the buildings, false positives appeared mostly on vegetation (close and distant), locally comprising clusters behaving as a continuous surface. such surfaces were mostly small and isolated. thus, the majority of them were excluded from further calculations. if such a cluster appeared in close proximity of a building, the vegetation was considered as another segment of the roof. therefore, the correctness values slightly decreased in the case of buildings with adjacent or overlapping vegetation (figure 8b). this problem could be minimised by utilising intensity information or by fusion with colour imagery.

4.3. comparison to related work

a similar approach for roof plane detection was published in [6]. by means of linear regression the authors fitted local planes to scanned points and determined local roughness and normal vectors. the points on vegetation were filtered out based on roughness values and the building was segmented into roof planes according to the normal vectors. it was not possible to use this approach on the test dataset due to the much lower point density, 1.5 points/m2 compared to 17 points/m2, and a higher percentage of outliers. thus, linear regression was not sufficient in the case of our dataset.
moreover, in [6] problem with missing breaklines between roof planes was mentioned; a normal vector corresponding to a point on a breakline does not match normal vectors of any adjacent roof planes. these problems were solved by utilizing the ransac algorithm that is able to eliminate outliers and chooses only one roof surface/plane for points on breaklines. finally, the solution when the roof surface was not planar but generally curved was not included in [6]. geoinformatics fce ctu 16(1), 2017 100 p. hofman and m. potůčková: building outline extraction from lidar data 5. conclusion the proposed methodology for building outline extraction shows promising results. it is fully automatic and based only on geometric attributes of the laser point cloud, i.e. on spatial coordinates of the points. moreover, it is suited for datasets with a lower point density (1.5 points/m2 in the case of our test point clouds). completeness of 97 % and correctness of 87 % was achieved in two test areas comprising rural, industrial, and urban types of buildings. the success rate was similar in the case of all roof types studied regardless of their complexity. the influence of adjacent or overlapping vegetation was low. the major influence on resulting extraction quality was observed for the size of roof faces in relation to the point density. in order to increase the number of detected small buildings, higher point cloud density is required. on the other hand, increasing point density also brings higher level of noise in the laser point cloud [5]. thus, successful practical application of the proposed method requires more tests that would be carried out on datasets with different point densities. acknowledgements the airborne laser scanning data were provided by the czech office for surveying, mapping, and cadastre. references [1] peter axelsson. “dem generation from laser scanner data using adaptive tin models”. in: international archives of photogrammetry and remote sensing 33.b4/1; part 4 (2000), pp. 111–118. [2] liang cheng et al. “building region derivation from lidar data using a reversed iterative mathematic morphological algorithm”. in: optics communications 286 (2013), pp. 244–250. doi: 10.1016/j.optcom.2012.08.028. [3] martin a. fischler and robert c. bolles. “random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”. in: communications of the acm 24.6 (june 1981), pp. 381–395. doi: 10.1145/358669.358692. [4] petr hofman and markéta potůčková. “roof type determination from a sparse laser scanning point cloud”. in: auc geographica 47.1 (2012), pp. 35–39. [5] alexandra dorothee hofmann. “an approach to 3d building model reconstruction from airborne laser scanner data using parameter space analysis and fusion of primitives rimitives”. phd thesis. technische universität dresden fakultät forstgeound hydrowissenschaften, institut für photogrammetrie und fernerkundung, jan. 19, 2005. [6] andreas jochem et al. “automatic roof plane detection and analysis in airborne lidar point clouds for solar potential assessment”. in: sensors 9.7 (2009), pp. 5241–5262. doi: 10.3390/s90705241. [7] karl kraus and norbert pfeifer. “determination of terrain models in wooded areas with airborne laser scanner data”. in: isprs journal of photogrammetry and remote sensing 53.4 (1998), pp. 193–203. geoinformatics fce ctu 16(1), 2017 101 https://doi.org/10.1016/j.optcom.2012.08.028 https://doi.org/10.1145/358669.358692 https://doi.org/10.3390/s90705241 p. hofman and m. 
potůčková: building outline extraction from lidar data [8] stephen r lach and john p kerekes. “robust extraction of exterior building boundaries from topographic lidar data”. in: geoscience and remote sensing symposium, 2008. igarss 2008. ieee international. vol. 2. ieee. 2008, pp. ii–85. [9] zahra lari and ayman habib. “an adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data”. in: isprs journal of photogrammetry and remote sensing 93 (2014), pp. 192–212. doi: 10.1016/j.isprsjprs.2013.12.001. [10] markéta potůčková and petr hofman. “comparison of quality measures for building outline extraction”. in: the photogrammetric record 31.154 (2016), pp. 193–209. doi: 10.1111/phor.12144. [11] rapidlasso gmbh. lastools. aug. 1, 2017. url: https://rapidlasso.com/lastools/. [12] riegl laser measurement systems. riegl lms-q680i. aug. 1, 2017. url: http:// www.riegl.com/products/airborne-scanning/produktdetail/product/scanner/ 23/ (visited on 08/01/2017). [13] martin rutzinger, franz rottensteiner, and norbert pfeifer. “a comparison of evaluation techniques for building extraction from airborne laser scanning”. in: ieee journal of selected topics in applied earth observations and remote sensing 2.1 (2009), pp. 11–20. doi: 10.1109/jstars.2009.2012488. [14] ellen schwalbe, hans-gerd maas, and frank seidel. “3d building model generation from airborne laser scanner data using 2d gis data and orthogonal point cloud projections”. in: proceedings of isprs wg iii/3, iii/4 3 (2005), pp. 12–14. [15] terrasolid ltd. terrascan. aug. 1, 2017. url: http://www.terrasolid.com/products/ terrascanpage.php (visited on 08/01/2017). [16] ivan tomljenovic and adam rousell. “influence of point cloud density on the results of automated object-based building extraction from als data”. in: agile conference castellon, spain. (castellon, spain). agile digital editions, june 2014. [17] ivan tomljenovic et al. “building extraction from airborne laser scanning data: an analysis of the state of the art”. in: remote sensing 7.4 (mar. 2015), pp. 3826–3862. doi: 10.3390/rs70403826. [18] trimble inc. trimble e-cognition. aug. 1, 2017. url: http://www.ecognition.com/ suite/ecognition-developer (visited on 08/01/2017). [19] vienna university of technology. opals orientation and processing of airborne laser scanning data. aug. 1, 2017. url: http://geo.tuwien.ac.at/opals/html/ index.html. [20] vienna university of technology. scop++. aug. 1, 2017. url: http://photo.geo. tuwien.ac.at/software/scop/ (visited on 08/01/2017). [21] shen wei. “building boundary extraction based on lidar point clouds data”. in: proceedings of the international archives of the photogrammetry, remote sensing and spatial information sciences 37 (2008), pp. 157–161. 
geoinformatics fce ctu 16(1), 2017 102 https://doi.org/10.1016/j.isprsjprs.2013.12.001 https://doi.org/10.1111/phor.12144 https://rapidlasso.com/lastools/ http://www.riegl.com/products/airborne-scanning/produktdetail/product/scanner/23/ http://www.riegl.com/products/airborne-scanning/produktdetail/product/scanner/23/ http://www.riegl.com/products/airborne-scanning/produktdetail/product/scanner/23/ https://doi.org/10.1109/jstars.2009.2012488 http://www.terrasolid.com/products/terrascanpage.php http://www.terrasolid.com/products/terrascanpage.php https://doi.org/10.3390/rs70403826 http://www.ecognition.com/suite/ecognition-developer http://www.ecognition.com/suite/ecognition-developer http://geo.tuwien.ac.at/opals/html/index.html http://geo.tuwien.ac.at/opals/html/index.html http://photo.geo.tuwien.ac.at/software/scop/ http://photo.geo.tuwien.ac.at/software/scop/ p. hofman and m. potůčková: building outline extraction from lidar data introduction data and test sites methodology pre-processing object detection plane fitting roof face segmentation outline extraction and regularization evaluation approach results and discussion overall quality quality in relation to building and point cloud parameters comparison to related work conclusion bim, gis and semantic models of cultural heritage buildings bim, gis and semantic models of cultural heritage buildings pavel tobiáš department of geomatics, faculty of civil engineering czech technical university in prague thákurova 7, 166 29 prague 6, czech republic pavel.tobias@fsv.cvut.cz abstract even though there has been a great development in using building information models in the aec (architecture/engineering/construction) sector recently, creation of models of existing buildings is not very common yet. the cultural heritage documentation, in most cases, is still kept in the form of 2d drawings containing only geometry without semantics, attributes or definitions of the relationships and hierarchies between particular building elements. this paper is based on the existing literature and focuses on the historic building information modelling to provide information about the current state of the art. first, a summary of available software is introduced, while not only bim tools but also related gis software is considered. this is followed by a review of existing efforts worldwide, while the efforts found are separated into two categories, considering their main focus (3d modelling or resulting data management). the last part of this article is dedicated to the summary of the facts found in the preceding review. the requirements on a resulting information model and the selection of suitable software are discussed and the abilities of bim and gis tools are compared. keywords: bim; historic building information modelling; gis; 3d model; cultural heritage. introduction if we want to perform the tasks related to the administration and maintenance of cultural heritage buildings, we urgently need comprehensive information about the objects of interest. to facilitate this, a large amount of data from various sources and in diverse file formats is to be brought together. then, an integrated information system, which covers all physical and functional characteristics of a building, can be created. indeed, the required data can be highly heterogeneous – we are talking about textual and graphical historical documents, plans, maps but also about up-to-date data from structural-historical investigations, geodetic surveys or photographic reconnaissance. 
considering that all architectural heritage objects inherently have three-dimensional spatial characteristics, the resulting information system, which will comprise all the mentioned documents, should allow the management of 3d models. even that might not be sufficient because we often need a 4d representation of a historic building to describe its changes in time. today cultural heritage documentation, in most cases, is still kept in the form of 2d drawings either digitally or on paper. these drawings often contain only geometric elements without defining semantics or the relationships between particular objects. however, for the purpose geoinformatics fce ctu 15(2), 2016, doi:10.14311/gi.15.2.3 27 http://orcid.org/0000-0003-0382-0410 https://doi.org/10.14311/gi.15.2.3 http://creativecommons.org/licenses/by/4.0/ p. tobiáš: bim, gis and semantic models of cultural heritage buildings of facility management and planning reconstructions, the ability to browse a building in a virtual 3d environment and to perform spatial and multi-criteria queries would be convenient. therefore, it is necessary to know the structure of the building, i.e. the interrelationships between architectural elements, integrate heterogeneous data sources – enrich the elements and thus create a semantic building model. the creation of semantic building information models (bim) is currently developing mainly in the field of the design and construction of new buildings (as-designed bim). nevertheless, with the use of modern data acquisition methods, such as laser scanning and digital photogrammetry, bim tools can also be used to develop models of already existing buildings (as-built bim) [22]. besides classical bim software which is used in the aec (architecture/engineering/construction) sector, we could also use geographic information systems (gis) tools in the bim process. although gis tools were originally developed to represent larger areas in 2d, they provide sophisticated methods of database storage, relationships definition and attribute and spatial queries creation, which also makes them a suitable tool for the management of information about historic buildings. moreover, the methods of 3d editing and representation in gis have also been further developed. this article investigates, with the use of available literature, the field of historic building information modelling and focuses on the comparison of bim and gis tools. first, the existing software for information modelling is summarized. then, a review of recent efforts is introduced. last, a discussion that sums up the acquired knowledge is presented. it is necessary to mention that this article expects basic knowledge in the field of bim. otherwise see e.g. [24] for more information. existing software tools building information modelling is a long-term process which should, in the ideal case, describe a building during its whole life cycle. numerous stakeholders – experts from various fields participate in the creation of the resulting model. thus, it is not surprising that there is no single bim application and the bim process is rather based on data exchange between particular professionals while each of them uses their dedicated software tool. the summary of the most important software tools available is in table 1. according to [16], the current bim software can be separated into three categories: 1. tools for the design of 3d models (3d modellers) 2. applications for the viewing and inspection of models 3. 
analytical software besides the software tools presented in the table, which are more suitable for design and construction, there exists a group of programs utilisable during the longest stage of the building lifecycle, i.e. during its operation and maintenance. such tools can also be used to manage information about already existing (or even historic) buildings. here, we are speaking about facility management software, such as archibus or graphisoft archifm and, last but not least, about gis tools, e.g. esri arcgis. it is apparent from the table that the leading software companies offer tools which cover most functions needed during the bim process. these software solutions are then designated for geoinformatics fce ctu 15(2), 2016 28 p. tobiáš: bim, gis and semantic models of cultural heritage buildings commercial use and are, of course, paid. the open source software edificus free upp is one of the few free of charge bim tools. however, its users have to pay for printing projects and for several tasks third-party applications, which are not free, have to be used, e.g. trimble sketchup for 3d modelling. thus, only bim viewers with limited functionality, such as autodesk navisworks freedom, can be considered as truly free bim tools [16]. in the following section, a review of the most important efforts that deal with the information modelling of historic buildings will be introduced. it is worth mentioning that nine out of 16 efforts use the autodesk revit software for 3d modelling. this is in accordance with the studies described in the article by david m. foxe [12] claiming that revit has a 67% market share followed by the products by bentley and graphisoft. recent efforts in table 2, there is a list of efforts which use bim or gis to create information models of cultural heritage buildings. the basis for this enumeration was found in the article by saygi and remondino [22] and it was modified and further extended with other works found. although this list is definitely not complete, it allows studying the approaches and tools used quite well. the efforts on the list can be separated into categories considering the chosen approach. the majority of the efforts are rather focused on the use of bim software. in these cases a library of parametric objects is usually developed. such a library is usable for the conversion of an unstructured point cloud, generated by laser scanning, into the form of a parametric 3d model. the second group of efforts uses a combination of bim and gis tools while bim is usually used to create 3d models and gis to manage the resulting information. the last category can be described as the gis approach because no classical bim software is used. however, the workflow is very similar to the combined bim/gis way. in the two following sections we will describe particular efforts in greater detail. the efforts focused on 3d modelling the creation of a 3d model must be preceded by data collection. currently laser scanning and digital photogrammetry are understood as modern methods of data acquisition. the result of laser scanning is a dense point cloud. although this point cloud can be used for some preservation purposes, it is hardly a full-fledged 3d model. therefore, it is no surprise that there are a lot of efforts dealing with the conversion of the acquired data from the point cloud into a parametric 3d model. for example, the work by fai et al. [11] is focused on problems bound with combining laser scanning data and 3d models from bim modelling software. 
however, this work uses generic object libraries which are not adapted for the specific needs of historic buildings. the creation of a prototype library designed specifically for the needs of the cultural heritage preservation was first described in the article by murphy et al. [19] while it was named historic building information modelling (hbim). this method was further developed in the paper [18]. the hbim process begins with the creation of data sets using terestrial laser scanning and photogrammetry. the next stage involves the design and development of a particular geoinformatics fce ctu 15(2), 2016 29 p. tobiáš: bim, gis and semantic models of cultural heritage buildings table 1: commercial and open-source bim tools (modified from [16]) product name manufacturer bim use primary function revit architecture autodesk creating and reviewing 3d models architectural modelling and parametric design bentley architecture bentley systems creating and reviewing 3d models architectural modelling sketchup pro trimble conceptual 3d modelling conceptual design modelling archicad graphisoft conceptual 3d architectural model architectural model creation teklastructures tekla conceptual 3d modelling architectural 3d model application dprofiler beck technology conceptual design and cost estimation 3d conceptual modelling with real-time cost estimation vectorworks designer nemetschek conceptual 3d modelling architectural model creation affinity trelligence conceptual 3d modelling early concept design edificus accasoftware architectural bim design and 3d object cad architectural modelling vico office vico software conceptual 5d modelling 5d conceptual model, cost and schedule data revit structure autodesk structural structural modelling and parametric design sds/2 design data structural 3d structural modelling and detailing risa risa technologies structural full suite of structural design applications robot autodesk structural analysis bi-directional link with revit structure green building studio autodesk energy analysis energy use and carbon footprint calculation structural analysis, design detailing, building performance bentley systems structural analysis, detailing, quantity take-off, building performance measures, assess and reports building performance solibri model checker solibri model checking and validation rule based checking for compliance and validation of all objects in model tekla bimsight tekla model viewer models combination, clashes checking navisworks manage/simulate autodesk model checking and validation clashes checking, 4d simulations of construction progress xbimxplorer open bim ifc viewer ifc files opening and viewing solibri model viewer solibri model viewer ifc files opening and viewing navisworks freedom autodesk model viewer ifc files opening and viewing library of parametric objects. to fulfill this task the software platform graphisoft archicad and the open-source scripting language gdl (geometric description language), which is implemented in archicad, were used. after the creation of the library, a semi-automatic process of mapping parametric objects into point clouds facilitates the 3d model creation. furthermore, oreni et al. [20], appolonio et al. [2] or brumana et al. [6] in their articles also deal with the creation of historic building-specific libraries. all the mentioned accepted geoinformatics fce ctu 15(2), 2016 30 p. 
tobiáš: bim, gis and semantic models of cultural heritage buildings table 2: current efforts dealing with the modelling of historic buildings (extended from [22]) approach reference paper(s) applied case software notes bim garagnani (2012) [13] an early byzantine church in ravenna autodesk revit architecture, greenspider plugin a plugin facilitating segmentation of unstructured point clouds bim attar et al. (2010) [3] historic warehouses converted into offices in toronto autodesk revit, autocad, pluginy gbxml, energyplus evaluation/analysis of building performance and energetic efficiency bim achille et al. (2012) [1] the main spire of the milan cathedral rhinoceros, webgl, back office, front office, plugin pointools the model used as a repository containing photographic catalogue. ability of sharing on the web bim oreni et al. (2013) [20] various types of historic vaults leica cloudwork, autodesk revit, autocad, rhinoceros parametric models of vaults bim apollonio et al. (2012) [2] palladian architecture – doric order autodesk revit libraries of parametric objects based on classical architectural literature bim boeykens et al. (2012) [5] the prague vinohrady synagogue graphisoft archicad, maxon, cinema4d reconstruction of a no longer existing synagogue with the use of bim bim fai et al. (2011) [11] historic factory areal in toronto autocad, civil 3d, sketchup, revit, navisworks 4d modelling bim foxe (2010) [12] historic buildings in boston and durham ? the resulting information models used as a basis for reconstruction. bim brumana et al. (2013) [6] a church in scaria d’intelvi, italy autodesk revit, autodesk green building studio different construction phases captured in the bim model (stratigraphy). the model further used for energetic efficiency analyses. bim baik et al. (2014) [4] historic buildings in jeddah, saudi arabia photomodeler scanner, autodesk recap 360, rhinoceros, autodesk revit library of parametric objects (jhbim) bim/gis yajing and cong (2011) [26] the stone heritage of kao temple autodesk revit and 3ds max, trimble sketchup, geomagic, autocad revit families for heritage buildings of stone a gis and bim connection for management purposes planned. bim/gis san josé-alonso et al. (2009) [21] various heritage object in spain pinta original sofware platform pinta combining bim and gis functionality bim/gis dore and murphy (2012) [9], murphy et al. (2013) [18] henrietta street in dublin graphisoft archicad, sketchup + citygml plugin, arcgis comprehensive workflow of 3d model creation based on laser scanning and a library of parametric objects, export into the gis environment for the purpose of data management. bimxgis saygi and remondino (2013) [22], saygi et al. (2013) [23] kurşunlu khan in turkey autodesk revit architecture + revit db link, autocad, sketchup, 3ds max, arcgis, postgis comparison between the bim and the gis approach gis centofanti et al. (2011) [7] villa and churches in italy autocad, 3ds max, rhinoceros, rapidform xor, microsoft access, arcgis 3d models in the gis environment, heritage information management and analyses gis jedlička et al. (2013) [15] the castle kozel kokeš, riscan, microstation, msr, sketchup, arcgis, city engine comprehensive workflow from data acquisition to import into gis for further analyses geoinformatics fce ctu 15(2), 2016 31 p. tobiáš: bim, gis and semantic models of cultural heritage buildings figure 1: examples of parametric objects from the hbim library [18] the term hbim. 
however, it is not clear whether it only describes parametric libraries or the whole modelling process in general. baik et al. [4], whose paper describes a library specifically designed for the modelling of the middle east architecture, use the localised term jhbim (jeddah historical building information modelling). also the work by yajing and cong [26] could belong to this group. even though they do not use the hbim term, they similarly create their own object library to model stone heritage buildings. last but not least, garagnani [13] deals with the description of his plugin greenspider, which was developed to facilitate the processing of unstructured point clouds. all the efforts mentioned in this article use, compared to murphy et al., the bim tool autodesk revit. not all the efforts use laser scanning data as a basis for 3d modelling. saygi et al. [22, 23] compare the bim and the gis approach to 3d modelling and data management and use archival drawings as their base. particular elements are then created manually in a suitable 3d modelling software. autodesk revit was used to try out the bim approach and a combination of tools (autocad, trimble sketchup and autodesk 3ds) to examine the possibilities of the gis processing. also boeykens et al. [5] utilise existing documentation in the form of drawings. laser scanning as a data acquisition method is in their case, of course, out of the question because they model an already non-existent building. the efforts focused on data management even though the research by saygi et al. [22, 23] was already mentioned in the previous section because it also deals with 3d modelling, its main topic is the analysis of heritage data storage and management. their articles evaluate the workflows of semantic model creation and storage in detail. the necessity of the 3d model segmentation into particular architectural elements is emphasized because information about each building component can only be stored this way. the possibilities of 3d geometry management and additional information storage geoinformatics fce ctu 15(2), 2016 32 p. tobiáš: bim, gis and semantic models of cultural heritage buildings figure 2: creation and visualisation of a 3d model based on a point cloud [4] are evaluated and gis software (arcgis tested) is identified as a currently more suitable tool because of its capabilities of non-homogeneous data aggregation and non-spatial information integration. the aforementioned is also acknowledged in the article by dore and murphy [9], in which the integration of a 3d model, resulting from the hbim process, into the gis environment is described. the resulting model should serve as a basis for information management and interconnection with other data sources including external sources, i.e. other information systems. the trimble sketchup application with an appropriate citygml plugin facilitates here the conversion between the bim software archicad and the gis tool arcgis. a very similar approach is used in the work by jedlička et al. [15] though they do not use any actual bim software. the same tool, i.e. arcgis, is also utilized in the work by centofanti et al. [7] in order to create an architectural information system specifically designed for the purposes of cultural heritage management and maintenance. this work also states that current bim software is still immature for the tasks of heritage preservation. thus, gis tools are preferred at least for data management. 
a similar information system is presented in the paper by san josé-alonso et al. [21], where it is named the cultural heritage information system (chis). since the authors found the currently available software tools insufficient, a new platform called pinta (processing information system for architecture) was developed. the pinta software enables a model to be created from laser scanning and digital photogrammetry data, stored, and used to automatically generate drawings and sections. most interesting is the emphasis on remote access for users from the ranks of the government and the general public (which is aligned with the spirit of bim cooperation). in contrast to the papers mentioned above, which prefer the use of gis for data management, there are several other works employing exclusively bim tools. the articles by foxe [12] or attar et al. [3] are such examples, describing efforts that utilize bim for planning reconstructions or for energy efficiency analyses (figure 3: 3d model of a heritage building in the gis environment [7]). nevertheless, it should be noted that both papers focus on heritage buildings in north america, which are significantly newer than is usual in europe; they are therefore structurally much closer to the modern buildings which bim was designed for. a similar situation is depicted in the contribution by fai et al. [11]. here, the bim software is used to plan the reconstruction of a defunct factory and shows its strength in 4d modelling, i.e. the ability to capture changes over time. finally, brumana et al. [6] also utilise bim to visualise different constructive phases in 3d (stratigraphy – see fig. 4). discussion this section summarizes the most important questions which arise during the historic building information modelling process; it is based on the aforementioned literature. first, it is necessary to realise that bim workflows, currently relatively well developed in the aec sector, are in most cases focused on the design and construction of new buildings. the requirements on the creation of models of existing buildings may be significantly different. for historic cultural heritage buildings the situation is even more difficult because such buildings contain many irregular architectural elements which can be damaged or worn out. the requirements on data management are also very high because the resulting 3d model has to integrate a large number of heterogeneous data sources. the requirements on an information model the resulting 3d model should contain semantics, characteristics of the object structure and relationships between particular architectural elements [22] (figure 4: architectural elements differentiated by particular constructive phases [6]). therefore, during the modelling stage capturing only visible surfaces is not sufficient and we also have to model the “detail behind the object’s surface”, i.e. to consider the methods of construction and the materials of architectural elements [18]. on the other hand, it is necessary to consider what level of detail is suitable for our needs. foxe in his article [12] points out that the model of an existing building is always to a certain extent different from the actual object.
there is always a certain level of simplification and abstraction and too much detail can also be inappropriate considering the increasing amounts of data which must be processed during further work with the model. moreover, it is worth mentioning that the level of detail does not only apply to geometry but it is also related to the accuracy of attributes – descriptive information. in this context, the term level of development is more suitable. this term was first used by the american institute of architects (aia) in 2008 and expresses the model detail with 5 values (100 – 500) [8]. in figure 5, there is a slightly simplified depiction of lod by foxe. although the levels of development were originally designed for newly built buildings, they can also be applied to geoinformatics fce ctu 15(2), 2016 35 p. tobiáš: bim, gis and semantic models of cultural heritage buildings historic buildings especially if their reconstruction is planned. what lod we want to achieve should be carefully thought out so that the model structure allows further addition of details. figure 5: the levels of bim according to [12] 3d geometric models which depict current as-found physical characteristics of heritage buildings can be considered as the primary result for the needs of cultural heritage preservation. however, descriptive information must also be integrated to meet the requirements of bim. in the result, the following data will be included [7, 17]: 1. building location and identification – coordinates in the national coordinate system, cadastral information (number of the building, land parcel number, owner, mode of building using. . . ) 2. historical documents • textual documents – historical review – history of the building, archival sources, chronicles, transcriptions. . . • raster data – old maps, plans, archival photographs and other image material 3. architectural analysis of the building – used material, construction systems, information about particular building elements, construction history, differentiation of structures according construction stages, art-historical and aesthetic evaluation of parts of the building 4. information about the condition of the building – closely related to the previous item • textual documents from surveys – used materials, construction techniques, condition, damage and structural problems • raster data from surveys – drawings, photographs, photoplans, maps • vector data from surveys – documentation created during the geodetic survey (site plans, elevations, sections, floor plans. . . ) 5. information about reconstructions, maintenance and other interventions – in raster or vector formats geoinformatics fce ctu 15(2), 2016 36 p. tobiáš: bim, gis and semantic models of cultural heritage buildings it is clear from the list above that the results of the structural-historical investigation [17] are a crucial data source for the creation of hbim in the czech republic. the resulting models of historic buildings will be an integral part of an information system. the required functionality of this information system can be summarized as follows [22, 18, 7]: 1. the ability to define the interrelationships and hierarchies between the objects of the model 2. management of descriptive information – attribute data which the model is enhanced by 3. 4d representation – the ability to represent temporal data 4. tools for 3d editing 5. 2d and 3d visualisation at suitable scales 6. the capability to browse attribute data, photographs and other documents 7. 
the interface for asking attribute, spatial and multi-criteria queries 8. automatic export into the form of documentation suitable for planning of reconstructions and historical studies the selection of suitable tools to be able to create and manage an information model, the right choice of software tools is crucial. unfortunately, today there is no comprehensive solution specifically designed to model and manage semantically enhanced 3d models of historic buildings [22]. the currently available approaches to spatial information management, i.e. bim and gis, have both their pros and cons (see fig. 3), therefore, we cannot definitely prioritize one of them. thus, meanwhile it will be necessary to employ both solutions, combine suitable software tools and utilise their advantages. table 3: the comparison of bim and gis capabilities [22] criteria for information management process bim gis definition of specified mutual and hierarchical relationships x x enhanced attribute management x x 3d editing functionalities x x spatial and multi-criteria query-able characteristics x x representation of multi-layered conceptual themes in 3d x x temporal (4d) representations x x the most important advantage of bim is the ability to create 3d models using intelligent parametric elements. on the other hand, existing libraries of parametric objects are generally not suitable for the 3d reconstruction of heritage buildings because in such buildings even objects of the same type (walls, columns. . . ) can be highly different in shape because of missing industrialisation and prefabrication in the past. furthermore, the definition of the heritage preservation-specific attributes, integration of non-homogeneous datasets and possibilities of geoinformatics fce ctu 15(2), 2016 37 p. tobiáš: bim, gis and semantic models of cultural heritage buildings asking spatial queries are limited in bim [22, 23, 18]. nevertheless, despite these drawbacks, bim remains a very powerful tool for the modelling phase. the design of new libraries which facilitate conversion from unstructured point clouds into the form of volumetric 3d models is then a very frequent topic of scientific works [20, 2, 6, 4, 26, 18, 19]. gis, on the other hand, has been primarily designed to manage and query spatial information. in the gis environment we can easily work with semantically enriched objects, non-geometric attributes can be linked with geometry and managed in relational databases. spatial and attribute queries can be asked. however, the possibilities of 3d editing are still rather limited. thus, it is no surprise that the most efficient workflow remains the aforementioned combination of bim tools for 3d modelling and gis tools for data management [22, 23, 26, 21, 9, 15]. in the context of bim the term of parametric modelling is often mentioned while it is necessary to realise that there are more approaches to parametric reconstruction of buildings. what approach is suitable for our needs depends mostly on our input data. classic bim software is an advanced 3d cad tool which utilises intelligent parametric objects. these objects represent all physical and functional properties of real-world architectural elements. in addition, interrelationships between the elements can be defined. if we can obtain building documentation in the form of 2d drawings, we can create a virtual 3d model manually from the elements (see e.g. [22]). 
a slightly different type of parametric modelling can be used to process the laser scanning or digital-photogrammetry data. if we have a parametric library prepared, we can automate the process of mapping vector data onto point clouds. the library can be created based on historical architectural books and it contains a textual description of particular elements in a text file, e.g. with the use of the gdl language. the architectural elements are then semiautomatically identified in point clouds and a discrete model can be replaced with continuous geometric primitives [18]. lastly, the procedural modelling of buildings with the use of shape grammars can be understood as a type of parametric modelling. this approach is similar to the previous mentioned because it employs a textual, human-readable description of architecture. the creation of models is based on different architectural styles. buildings are divided into parts and represented by a set of basic shapes. these shapes are controlled by replacement rules while each shape can be replaced with more detailed shapes or it can be changed by a transformation. although procedural modelling was developed to create models of larger urban areas and for visualisation purposes [14, 25], it might also be a suitable method for 3d reconstructions of single buildings if the 2d drawings are available [10]. furthermore, the focus on semantics and defining objects hierarchies is very similar to bim. conclusion the goal of this review was to summarize recent efforts dealing with the information modelling of cultural heritage buildings and to compare the abilities of bim and gis for this purpose. today, there is no comprehensive software solution designed specifically for creating and managing information about historic buildings, i.e. for the whole workflow of processing measured data and other data sources, 3d modelling, management of the resulting model, analyses and visualisations. this is apparent from the review because most of the existing geoinformatics fce ctu 15(2), 2016 38 p. tobiáš: bim, gis and semantic models of cultural heritage buildings works utilise a combination of several bim and gis software tools. the existing efforts can be divided in two parts according to the approach used. bim tools are currently employed mainly for 3d modelling based on laser-scanning and photogrammetry data. the design of parametric libraries and conversion from an unstructured point cloud into a continuous model are very important topics here. gis, on the other hand, is used mainly for the management of the resulting models and establishing connections with descriptive attributes or even other information systems. however, the question of transformation from bim into the gis environment seems to be still not fully resolved. acknowledgements this work was supported by the grant agency of the czech technical university in prague, grant no. sgs16/063/ohk1/1t/11 “innovative approaches in the field of geomatics: data collection, processing and analyses”. references [1] c. achille, f. fassi, and l. fregonese. “4 years history: from 2d to bim for ch: the main spire on milan cathedral”. in: 2012 18th international conference on virtual systems and multimedia (vsmm). sept. 2012, pp. 377–382. doi: 10.1109/vsmm.2012. 6365948. [2] fabrizio ivan apollonio, marco gaiani, and zheng sun. “bim-based modeling and data enrichment of classical architectural buildings”. italian. in: scires-it 2.2 (dec. 2012), pp. 41–62. issn: 2239-4303. doi: 10.2423/i22394303v2n2p41. 
[3] ramtin attar et al. “210 king street: a dataset for integrated performance assessment”. in: proceedings of the 2010 spring simulation multiconference. springsim ’10. san diego, ca, usa: society for computer simulation international, 2010, 177:1–177:4. isbn: 978-1-4503-0069-8. doi: 10.1145/1878537.1878722. [4] a. baik et al. “jeddah historical building information modelling "jhbim" -object library”. in: isprs annals of photogrammetry, remote sensing and spatial information sciences ii-5 (may 2014), pp. 41–47. issn: 2194-9050. doi: 10.5194/isprsannals-ii5-41-2014. [5] stefan boeykens, caroline himpe, and bob martens. “a case study of using bim in historical reconstruction. the vinohrady synagogue in prague”. in: digital physicality | physical digitality. ecaade and cvut, faculty of architecture, sept. 2012, pp. 729–738. isbn: 978-9-4912070-3-7. url: https://lirias.kuleuven.be/handle/ 123456789/350340 (visited on 02/02/2016). [6] raffaella brumana et al. “from survey to hbim for documentation, dissemination and management of built heritage: the case study of st. maria in scaria d’intelvi”. english. in: the institute of electrical and electronics engineers, inc. (ieee) conference proceedings. vol. 1. piscataway: the institute of electrical and electronics engineers, inc. (ieee), oct. 2013, p. 497. url: http://search.proquest.com/docview/1565886446 (visited on 02/02/2016). geoinformatics fce ctu 15(2), 2016 39 https://doi.org/10.1109/vsmm.2012.6365948 https://doi.org/10.1109/vsmm.2012.6365948 https://doi.org/10.2423/i22394303v2n2p41 https://doi.org/10.1145/1878537.1878722 https://doi.org/10.5194/isprsannals-ii-5-41-2014 https://doi.org/10.5194/isprsannals-ii-5-41-2014 https://lirias.kuleuven.be/handle/123456789/350340 https://lirias.kuleuven.be/handle/123456789/350340 http://search.proquest.com/docview/1565886446 p. tobiáš: bim, gis and semantic models of cultural heritage buildings [7] m. centofanti et al. “the architectural information system siarch3d-univaq for analysis and preservation of architectural heritage”. english. in: international archives of the photogrammetry, remote sensing and spatial information sciences isprs archives 38.5 (sept. 2011), pp. 9–14. issn: 1682-1750. [8] martin černý. bim příručka. praha: odborná rada pro bim o.s., 2013. isbn: 978-80260-5296-8. [9] c. dore and m. murphy. “integration of historic building information modeling (hbim) and 3d gis for recording and managing cultural heritage sites”. in: ieee, sept. 2012, pp. 369–376. isbn: 978-1-4673-2564-6, 978-1-4673-2563-9. doi: 10.1109/vsmm.2012. 6365947. [10] kristinn nikulás edvardsson. “3d gis modeling using esri´s cityengine, a case study from the university jaume i in castellon de la plana spain”. phd thesis. university jaume i in castellon de la plana spain, 2013. [11] stephen fai et al. “building information modeling and heritage documentation”. in: xxiii cipa international symposium, prague, czech republic, 12th16th september. 2011. [12] david m. foxe. “building information modeling for constructing the past and its future”. english. in: apt bulletin 41.4 (jan. 2010), pp. 39–45. issn: 0848-8525. url: http://www.jstor.org/stable/41000037 (visited on 02/02/2016). [13] simone garagnani. “semantic building information modeling and high definition surveys for cultural heritage sites”. in: disegnarecon special issue.doco 2012 (2012), pp. 297–302. [14] simon haegler, pascal müller, and luc van gool. “procedural modeling for digital cultural heritage”. 
in: eurasip journal on image and video processing 2009.1 (dec. 2009). issn: 1687-5281. doi: 10.1155/2009/852392. [15] karel jedlička, otakar čerba, and pavel hájek. “creation of information-rich 3d model in geographic information system case study at the castle kozel”. in: 13th sgem geoconference on informatics, geoinformatics and remote sensing. june 2013. doi: 10.5593/sgem2013/bb2.v1/s11.010. [16] s. logothetis, a. delinasiou, and e. stylianidis. “building information modelling for cultural heritage: a review”. english. in: isprs annals of the photogrammetry, remote sensing and spatial information sciences ii.5 (jan. 2015), pp. 177–183. issn: 2194-9042. doi: 10.5194/isprsannals-ii-5-w3-177-2015. [17] petr macek. standardní nedestruktivní stavebně-historický průzkum. 2., doplněné vyd. příloha časopisu zprávy památkové péče, roč. 61. – vydavatel: státní ústav památkové péče v praze. praha: nakladatelství jalna, 2001. isbn: 80–86234–22–3. url: http:// pamatky-facvut.cz/download/dokumenty/standardni.pdf (visited on 05/13/2015). [18] maurice murphy, eugene mcgovern, and sara pavia. “historic building information modelling – adding intelligence to laser and image based surveys of european classical architecture”. in: isprs journal of photogrammetry and remote sensing. terrestrial 3d modelling 76 (feb. 2013), pp. 89–102. issn: 0924-2716. doi: 10.1016/j.isprsjprs. 2012.11.006. geoinformatics fce ctu 15(2), 2016 40 https://doi.org/10.1109/vsmm.2012.6365947 https://doi.org/10.1109/vsmm.2012.6365947 http://www.jstor.org/stable/41000037 https://doi.org/10.1155/2009/852392 https://doi.org/10.5593/sgem2013/bb2.v1/s11.010 https://doi.org/10.5194/isprsannals-ii-5-w3-177-2015 http://pamatky-facvut.cz/download/dokumenty/standardni.pdf http://pamatky-facvut.cz/download/dokumenty/standardni.pdf https://doi.org/10.1016/j.isprsjprs.2012.11.006 https://doi.org/10.1016/j.isprsjprs.2012.11.006 p. tobiáš: bim, gis and semantic models of cultural heritage buildings [19] maurice murphy, eugene mcgovern, and sara pavia. “historic building information modelling (hbim)”. english. in: structural survey 27.4 (aug. 2009), pp. 311–327. issn: 0263-080x. doi: 10.1108/02630800910985108. [20] d. oreni et al. “hbim for conservation and management of built heritage: towards a library of vaults and wooden bean floors”. english. in: isprs annals of photogrammetry, remote sensing and spatial information sciences ii-5/w1 (july 2013), pp. 215– 221. issn: 2194-9050. doi: 10.5194/isprsannals-ii-5-w1-215-2013. [21] j. i. san josé-alonso et al. “information and knowledge systems for integrated models in cultural heritage”. in: proceedings of the 3rd isprs international workshop 3d-arch 2009. vol. 38. citeseer, 2009, p. 5. url: http://citeseerx.ist.psu.edu/viewdoc/ download?doi=10.1.1.445.2498&rep=rep1&type=pdf (visited on 02/02/2016). [22] g. saygi and f. remondino. “management of architectural heritage information in bim and gis: state-of-the-art and future perspectives”. english. in: international journal of heritage in the digital era 2.4 (dec. 2013), pp. 695–714. issn: 2047-4970. doi: 10.1260/2047-4970.2.4.695. [23] g. saygi et al. “evaluation of gis and bim roles for the information management of historical buildings”. english. in: isprs annals of photogrammetry, remote sensing and spatial information sciences ii-5/w1 (july 2013), pp. 283–288. issn: 2194-9050. doi: 10.5194/isprsannals-ii-5-w1-283-2013. [24] pavel tobiáš. “an investigation into the possibilities of bim and gis cooperation and utilization of gis in the bim process”. 
in: geoinformatics fce ctu 14.1 (june 2015), pp. 65–78. doi: 10.14311/gi.14.1.5. [25] b. watson et al. “procedural urban modeling in practice”. in: ieee computer graphics and applications 28.3 (may 2008), pp. 18–26. issn: 0272-1716. doi: 10.1109/mcg. 2008.58. [26] di yajing and wu cong. “research on the building information model of the stone building for heritages conservation with the outer south gate of the ta keo temple as an example”. in: 2011 international conference on electric technology and civil engineering (icetce). apr. 2011, pp. 1488–1491. doi: 10.1109/icetce.2011.5776479. geoinformatics fce ctu 15(2), 2016 41 https://doi.org/10.1108/02630800910985108 https://doi.org/10.5194/isprsannals-ii-5-w1-215-2013 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.445.2498&rep=rep1&type=pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.445.2498&rep=rep1&type=pdf https://doi.org/10.1260/2047-4970.2.4.695 https://doi.org/10.5194/isprsannals-ii-5-w1-283-2013 https://doi.org/10.14311/gi.14.1.5 https://doi.org/10.1109/mcg.2008.58 https://doi.org/10.1109/mcg.2008.58 https://doi.org/10.1109/icetce.2011.5776479 geoinformatics fce ctu 15(2), 2016 42 p. tobiáš: bim, gis and semantic models of cultural heritage buildings efficient plotting the functions with discontinuities based on combined sampling efficient plotting the functions with discontinuities based on combined sampling tomáš bayer department of applied geoinformatics and cartography, faculty of science, charles university, czech republic bayertom@natur.cuni.cz abstract. this article presents a new algorithm for interval plotting of the function y = f(x) based on combined sampling. the proposed method synthesizes the uniform and adaptive sampling approaches and provides a more compact and efficient function representation. during the combined sampling, the polygonal approximation with a given threshold α between the adjacent segments is constructed. the automated detection and treatment of the discontinuities based on the lr criterion are involved. two implementations, the recursive-based and stack-based, are introduced. finally, several tests of the proposed algorithms for the different functions involving the discontinuities and several map projection graticules are presented. the proposed method may be used for more efficient sampling the curves (map projection graticules, contour lines, or buffers) in geoinformatics. keywords: function; adaptive sampling; combined sampling; recursive approach; stack; discontinuity; polygonal approximation; visualization; map projection; plotting; gis; octave; mathematica. 1. introduction a function y = f(x) on interval ω = [a,b] may have different form. for efficient plotting, its polygonal approximation needs to be constructed. a current approach concentrated on uniform sampling with the step δx may not be sufficient. despite its popularity, the equally spaced points cannot describe its course without errors; the problems of undersampling or oversampling are common. adaptive sampling brings several benefits, it adapts to a different curvature of the function, reduces the amount of redundant data and provides a natural and smooth plot of the function without the jumps and breaks. this technique is popular in computer graphics; recall the well-known decasteljau or chaikin’s algorithms for the curve approximation. combining the uniform and adaptive sampling approaches their advantages can be synthesized. the resulted representation is more compact, efficient and smooth. 
a function may contain points of discontinuities that need to be detected. a subdivision of the given interval ω, to the set of disjoint subintervals ωgk without the internal singularities, containing only “good” data needs to be undertaken. in other words, the polygonal approximation of the function needs to be split into the several disjoint parts. the proposed method works by the requirements mentioned above. a broad set of the singularities can be detected and treated using the multiple criteria. subsequently, for each interval ωgk, a polygonal approximation of f(x) is constructed using combined sampling. since there are many sophisticated solutions built-in the high-end systems (mathematica, maple), our simple algorithm based on the recursive approach is efficient and easy-to-implement. this paper is organized as follows. in section 3, the combined sampling technique for 1d functions is presented. section 4 deals with the detection of singularities, section 5 describes the combined sampling of the discontinuous functions. in section 6, the combined sampling geoinformatics fce ctu 17(2), 2018, doi:10.14311/gi.17.2.2 9 http://orcid.org/0000-0001-6279-1926 http://dx.doi.org/10.14311/gi.17.2.2 http://creativecommons.org/licenses/by/4.0/ t. bayer: efficient plotting the functions with discontinuities technique is tested on the set of several functions. subsequently, its behavior on four map projections is analyzed. 2. related work there are several strategies for plotting the function y = f(x) on interval ω = [a,b]. the naive approach based on sampling of f in a fixed amount of the equally spaced points is described in [20]. the simple functions suffer from oversampling, while the oscillating curves are under-sampled; these issues are mentioned in [14]. another approach based on the interval constraint plot constructing a hull of the curve was described in [6], [13], [20]. the automated detection of a useful domain and a range of the function is mentioned in [41]; the generalized interval arithmetic approach is described in [40]. a significant refinement is represented by adaptive sampling providing a higher sampling density in the higher-curvature regions. the are several algorithms for the curve interpolation preserving the speed, for example: [37], [42], [43]. the adaptive feed rate technique is described in [44]. an early implementation in the mathematica software is presented in [39]. by reducing data, these methods are very efficient for the curve plotting. the polygonal approximation of the parametric curve based on adaptive sampling is mentioned in the several papers. the refinement criteria, as well as the recursive approach, are discussed in [15]. an approximation by the polygonal curves is described in [7], the robust method for the geometric and spatial approximation of the implicit curves can be found in [27], [10], the affine arithmetic working in the triangulated models in [32]. however, the map projections are never defined by the implicit equations. similar approaches can be used for graph drawing [21]. other techniques based on the approximation by the breakpoints can be found in many papers: [33], [9], [3]; these approaches are used for the polygonal approximation of the closed curves and applied in computer vision. 3. combined sampling in this section, the proposed combined sampling technique providing the polygonal approximation of the parametric curve involving the discontinuities will be presented. 
the modified method will be used for the function f(x) reconstruction and plot. based on the ideas of splitting the domain into the subintervals without the discontinuities, it represents a typical problem solvable by the recursive approach. 3.1. polygonal approximation of the curve let y = f(x), m ⊂ r, f : m → r be a function of a real variable x, and the set m represents the domain of the function f. let ω = [a,b], a ∈ m, b ∈ m be the subdomain inside which the polynomial approximation pi = (xi,f(xi)), 1 ≤ i ≤ n of the curve f is constructed, where x1 = a < x1 < ... < xn = b. this approach leads to a discrete reconstruction of f from the set of sampled points. the behavior of f should be reconstructed concerning its curvature. the classical approach based on uniform sampling from the equidistant points xi with the step δ, where δ = xi+1−xi, provides a good approximation only if δ → 0. for the straight parts of the curve, many geoinformatics fce ctu 17(2), 2018 10 t. bayer: efficient plotting the functions with discontinuities -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) as: α =1°, 83 sampled points -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) as: α =5°, 23 sampled points -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) as: α =10°, 11 sampled points -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) us: α =1°, 629 sampled points -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) us: α =5°, 117 sampled points -4 -3 -2 -1 0 x -2 -1 0 1 2 f( x ) us: α =10°, 63 sampled points figure 3.1: adaptive (as) and uniform (us) sampling of the meridian of the longitude λ = −180◦ in the sanson projection for angles αi = 1◦, 5◦, 10◦. almost colinear segments are constructed; too much redundant data is generated. conversely, for larger δ, the shape of the function in the high-curvature areas may not be captured adequately, which is discussed in [15]. in general, the main disadvantages of uniform sampling, the problems of undersampling or oversampling, are referred. for a current density of the sample, the equally spaced points cannot describe the function course without errors. 3.2. combined sampling technique by avoiding colinear segments as well as a better adaptation to the different curvature, adaptive sampling respects the behavior of the function more naturally. it leads to the more compact data representation of the curve while its shape, as well as its aesthetic look, are preserved. a comparison of adaptive and uniform sampling for the meridian curve is illustrated in fig. 3.1. uniform sampling requires more data to maintain the same curvature represented by the angle αi between the adjacent segments (pi−1,pi) and (pi,pi+1). the difference increases depending on the curvature. in general, for adaptive sampling, the required amount of points is about one order less. unfortunately, two issues are referred in [15]: 1. for specific functions, some narrow subintervals in the early iterations may be skipped and stay unprocessed. 2. for some periodic functions, a refinement based on the iterative subdivision into the segments of the same length may not be successful, if the function values at these points are equal. to avoid the problems in the first case, it is natural to take advantage of both methods geoinformatics fce ctu 17(2), 2018 11 t. 
bayer: efficient plotting the functions with discontinuities -2 -1 0 1 2 x 0 0.5 1 1.5 2 2.5 3 3.5 4 f( x ) as: d=1 -2 -1 0 1 2 x 0 0.5 1 1.5 2 2.5 3 3.5 4 f( x ) as: d=2 -2 -1 0 1 2 x 0 0.5 1 1.5 2 2.5 3 3.5 4 f( x ) as: d=3 figure 3.2: illustration of the proposed algorithm: an adaptive sampling of the function y = x2, x ∈ [−2, 2] for the depths of recursion d = 1, 2, 3. and propose the combined sampling method. uniform sampling represents the initial step, later steps refining the curve approximation are provided by adaptive sampling. the second problem may overcome adding the partial randomness to the generated segments. refinement criteria. an important role is played by the refinement criteria smoothing the polyline [15]. suppose (pi−1,pi,pi+1) to be three consecutive sampled points of the curve. the primary criterion is represented by the angular difference of both segments αi = α(pi−1,pi,pi+2) = arccos |u ·v| ‖u‖‖v‖ ±π, u = pi+1 −pi, v = pi−1 −pi. different criteria can provide the analogous results: recall the distance of pi from pi−1,pi+1, or, the local length ratio [29]. combined sampling algorithm without the singularities the combined sampling algorithm is based on the idea of the hierarchical reconstruction of the curve shape, which follows the recursive approach with the multiple calls mixing the uniform and adaptive techniques. unlike a simple algorithm discussed in [15], the proposed method can handle the singularities and discontinuities of f and requires a lower recursion depth. initially, sampling without the treatment of singularities will be presented. suppose d to be the current depth of the recursion, d to be the minimum and d to be the maximum recursion depth. combined sampling returns a polynomial approximation of the curve by the refinement criteria α and the recursion depth d. our algorithm combines the uniform and adaptive sampling techniques and subdivides ω into a specified number of the disjoint subintervals ωk of the similar size during each recursive step, where k = 4. hence, ω is split into the approximate quarters with the randomly shifting borders, which are not a direct multiple of 0.25. initially, if d ≤ d the interval is subdivided into four subintervals regardless of α; four new segments of the polygonal approximation are created. subsequently, when d > d, between geoinformatics fce ctu 17(2), 2018 12 t. bayer: efficient plotting the functions with discontinuities algorithm 1 combined sampling, the initial phase. 1: function csinit(f,l,a,b,d,d,d,�,α) 2: l = ∅ 3: ya = f(a),yb = f(b) 4: if discontinuity in a then 5: throw singularityexception (a) 6: if discontinuity in b then 7: throw singularityexception (b) 8: l ← point(a,ya) 9: as(f,l,a,b,ya,yb,d,d,d,�,α) 10: l ← point(b,yb) each pair of four new consecutive segments, the refinement criterion αk, k = 1, ..., 3, is evaluated and compared to α. if αk > a and b−a > ε, two adjacent segments are created. the interval is subdivided into 2-4 new until the visually “smooth” polynomial approximation of the curve is obtained. in other words, uniform sampling is followed by adaptive sampling refining the properties of the insufficiently estimated segments. let l = {pi}ni=1 be the polynomial approximation of f(x) and ω = [a,b] a subdomain. the algorithm may be summarized as follows: 1. the initial phase let l = ∅ be the empty set. compute ya = f(a) and yb = f(b). if a singularity in a or b is detected, throw the exception. add the initial vertex to l: l ← pa, where pa = (xa,ya). set the recursion depth d = 1. 
2. the recursive step enter the recursive procedure and do the following substeps: (a) if d > d or b−a < ε stop the recursive procedure and go to step 3. (b) for a given ω = [a,b], the interval is split by the three points x1 = a + 1 2 r1(b−a), x2 = a + r2(b−a), x3 = a + 3 2 r3(b−a), into the approximate quarters ω1 = [a,x1], ω2 = [x1,x2] ω3 = [x2,x3], ω4 = [x3,b], where r1, r2, r3 are the random numbers inside the interval [0.45, 0.55]. this step prevents a situation, when f(a) = f(b) = f(x1) = f(x2) = f(x3), but their intermediate points do not held this condition; it is typical for some periodic functions (for example, if y = sin 2x, and ω = [0, 2π]). (c) if a singularity in x1, x2, or x3 occurs, throw the new exception with the argument indicating the singularity. (d) evaluate the function values y1 = f(x1), y2 = f(x2), and y3 = f(x3) at new vertices p1, p2, p3. geoinformatics fce ctu 17(2), 2018 13 t. bayer: efficient plotting the functions with discontinuities algorithm 2 combined sampling, the recursive phase. 1: function cs(f,l,a,b,ya,yb,d,d,d,α) 2: if d > d∨ (b−a < ε) then 3: return 4: r1 = rand(0.45, 0.55), r2 = rand(0.45, 0.55), r3 = rand(0.45, 0.55) 5: x1 = a + 12r1(b−a), x2 = a + r2(b−a), x3 = a + 3 2r3(b−a) 6: if discontinuity in xi then 7: throw singularityexception (xi), i = 1, 2, 3 8: y1 = f(x1), y2 = f(x2), y3 = f(x3) 9: pa = point(a,ya), pb = point(b,yb), pi = point(xi,yi), i = 1, 2, 3 10: α1 = α(pa,p1,p2), α2 = α(p1,p2,p3), α3 = α(p2,p3,pb) 11: if (α1 > α) ∨ (d <= d) then 12: cs(f,l,a,x1,ya,y1,d + 1,d,d,α); 13: l ← point(x1,y1) 14: if (α1 > α) ∨ (α2 > α) ∨ (d <= d) then 15: cs(f,l,x1,x2,y1,y2,d + 1,d,d,α); 16: l ← point(x2,y2) 17: if (α2 > α) ∨ (α3 > α) ∨ (d <= d) then 18: cs(f,l,x2,x3,y3,y3,d + 1,d,d,α); 19: l ← point(x3,y3) 20: if (α3 > α) ∨ (d <= d) then 21: cs(f,l,x3,xb,y3,yb,d + 1,d,d,α); (e) check the refinement criteria α1 = α(pa,p1,p2), α2 = α(p1,p2,p3), α3 = α(p2,p3,pb), and the recursive depth d. when the curve is not sufficiently smooth, or d ≤ d, it needs to be refined. for d ≤ d, this step begins with uniform sampling; for d > d it transforms to adaptive sampling. (f) if α1 > α, call the recursive procedure with the increased depth d = d + 1 for the interval [a,x1). (g) add new point p1 to the polynomial approximation of f(x): l ← p1. (h) if α1 > α∨α2 > α, call the recursive procedure with the increased depth d = d+ 1 for the interval (x1,x2). (i) add new point p2 to the polynomial approximation of f(x): l ← p2. (j) if α2 > α∨α3 > α, call the recursive procedure with the increased depth d = d+ 1 for the interval (x2,x3). (k) add new point p3 to the polynomial approximation of f(x): l ← p3. (l) if α3 > α, call the recursive procedure for the interval (x3,xb]. 3. final step add the last point pb to the polynomial approximation of f(x) : l ← pb and finish the combined sampling procedure. geoinformatics fce ctu 17(2), 2018 14 t. bayer: efficient plotting the functions with discontinuities method m3. by summarizing the facts mentioned above the proposed algorithm uses a triple recursion. in each step, the refined f(x) is approximated by at least three new points. for the pseudocode, see algs. 1, 2. compared to [15], this solution has a dramatically improved performance and requires fewer subdivisions. due to the significantly higher recursion depth d, for fast and efficient estimation of the f(x) curvature, a single recursive step is not sufficient. this issue is illustrated in tab. 1; our method is labeled as m3, a single recursion step method as m1. 4. 
singularity detection this section describes several simple strategies for handling and detecting the discontinuities; their overview can be found in [24], [2], [26]. early methods are based on the markov models [18], [28], [25]. currently, there are several approaches: applications of the fourier transformation [5], [12], [4], [23], [17], [16], [11], wavelets [35], [45], chebyshev series [31], triangulations [19]. geometric measures of the curvature are described in [38], [22], [34], [1], [30], [36], the statistical-based methods in [8], [26]. unlike the solutions searching for all discontinuities of f at entire ω, each sampled point xi is checked for a discontinuity; the removable, jump and infinite discontinuities are involved. these testing criteria are local, they analyze behavior of the function in a boundary b(xi,ε) of xi, covered by the equally spaced points xi−2 = xi −2h, xi−1 = xi −h, and, xi+1 = xi + h, xi+2 = xi + 2h, where h = ε/2. for practical computation ε = 0.001. an infinite discontinuity is detected if (|fi−k| > y) ∨ (fi−k ≡±inf) ∨ (fi−k ≡ nan) , k = −2, .., 2. where y is the given threshold, nan and inf are the symbols for the positive infinity and the result of the undefined operation. the remaining discontinuities may be found using the criteria measuring the smoothness by changes in the variation. two criteria, weno and lr, described in [30], are presented. the weno criterion wj(x), j = 0, 1, 2, is written as wj(x) = αi∑2 j=0 αi , αj = 1 (isj + �)2 , where is0 = 13 12 (fi−2 − 2fi−1 + fi)2 + 1 4 (fi−2 − 4fi−1 + fi)2, is1 = 13 12 (fi−1 − 2fi + fi+1)2 + 1 4 (fi−1 −fi+1)2, is2 = 13 12 (fi+2 − 2fi+1 + fi)2 + 1 4 (fi+2 − 4fi+1 + fi)2. if w0(x) > 1/3∨w1(x) > 1/3∨w2(x) > 1/3, f is probably not smooth at xi. the lr criterion has the following form lr(x) = ∣∣f2r −f2l ∣∣ f2r + f2l , (4.1) where fr = 3fi −4fi+1 + fi+2, fl = 3fi −4fi−1 + fi−2. if lr > 0.8, f is probably not smooth at xi. for practical computations, the criteria provide similar results. however, the weno criterion seems to be more sensitive, and a steep slope classifies as the jump discontinuity. this issue is widely discussed in [30]. geoinformatics fce ctu 17(2), 2018 15 t. bayer: efficient plotting the functions with discontinuities table 1: the depth of recursion for different values of the refinement criteria α in the methods m1 and m3 (proposed). method refinement criterion α[ ◦] 20 15 10 5 2 1 0.5 0.2 0.1 m1 7 9 9 19 55 105 217 535 1071 m3 3 3 3 7 15 27 57 121 297 5. combined sampling with the singularities the proposed method checks for a discontinuity at each sampled point xi ∈ ω using the rules mentioned above, where the amount of discontinuities denoted as k is not a priori known. it does not represent a rigid mathematical solution describing the behavior of the analyzed function, but only a simple method applicable to the technical computing. our approach follows a heuristic sufficient for most functions, especially for the meridian or parallel coordinate functions. as a result, the set of disjoint subsets ωg, ωg ⊆ ω, containing “good” data that allows for adaptive sampling is constructed; this technique was used in [26]. the point xi is classified as “good” if no singularity at f(xi) occurs. an interval ω containing only “good” points is classified as “good” and labeled as ωg. unfortunately, the procedure cannot be generalized for higher-dimensional problems. suppose the j − th interval ωj = [aj,bj] containing a singularity c, c ∈ ωj, and ε, ε > 0, representing the numerical threshold. 
in general, several cases need to be distinguished: • case 1: the coincidence with the lower bound if aj ≡ c, the discontinuity c coincides with the lower bound of the interval ωj. • case 2: the proximity to the lower bound if aj < c∧|c−aj| < ε, the discontinuity c is close the lower bound of the interval ωj. • case 3: the coincidence with the upper bound if bj ≡ c, the discontinuity c coincides with the upper bound of the interval ωj. • case 4: the proximity to the upper bound if bj > c∧|c− bj| < ε, the discontinuity c is close the upper bound of the interval ωj. • case 5: the interior singularity if aj < c < bj, where |c−aj| > ε∧|c− bj| > ε, the discontinuity c lies inside the interval ωj. for practical computation in the floating-point arithmetic, cases 1, 2 and cases 3, 4 may be joined, and we search for the discontinuity close to the lower or upper bounds. the modified conditions are: • cases 1+2: the proximity/coincidence to the lower bound if aj ≤ c∧|c−aj| < ε, the discontinuity c coincides or it is close to the lower bound of the interval ωj. geoinformatics fce ctu 17(2), 2018 16 t. bayer: efficient plotting the functions with discontinuities • cases 3+4: the proximity/coincidence to the upper bound if bj ≥ c∧|c− bj| < ε, the discontinuity c coincides or it is close to the upper bound of the interval ωj. another two essential cases need to be resolved: • case 6: too narrow interval if bj −aj < ε, an “empty” interval ωj arises. • case 7: the incorrect interval if aj > bj, the incorrect interval ωj appears. in general, they occur due to the behavior of the sampled function as well as a result of the floating-point arithmetic. for our problem, the empty interval is “unpromising” and (probably) does not contain any important data1. moreover, there is no chance of the possible improvement; the interval cannot be expanded. in most cases, the “incorrect” intervals are also empty, or they become empty during the next processing. in general, these intervals may be rejected from further processing; cases 6, 7 can be solved simultaneously. depending on the position of the singularity c, three types of operations are performed: • delete empty/incorrect ωj the empty or incorrect interval ωj is deleted. • shrinking ωj the interval ωj with the discontinuity c close to the bounds is shrunk from the left/right. for cases 1+2, aj = aj + ε, for cases 3+4, bj = bj −ε. • splitting ωj the interval ωj with the internal discontinuity c is split so that the new disjoint intervals ωj,1 = [aj,c−ε], and ωj,2 = [c + ε,bj) are created. it is obvious that the case 6 appears as the result of the incorrect split or shrink operations, while the case 7 is the result of the incorrect shrink operation. 5.1. combined sampling algorithm involving the singularities let us summarize the facts mentioned above into the algorithm. our implementation is based on the stack s, see alg. 3, the amount of ωj splits denoted s represents the recursion depth. the basic idea is to set ωg ≡ ω, to loop over all the adaptively sampled points pi, to check for a singularity c in the boundary b(xi,ε) of pi and to make a decision about the ωj boundaries. if no singularity occurs, all points are classified as good, and the polygonal approximation lj of the curve is constructed. all disjoint polygonal approximations lj are stored in the list l. the discontinuities are localized successively, one by one. 
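for illustration, the delete/shrink/split operations above can be restated compactly in octave; the function and variable names below are ours, the logic mirrors the processint procedure of alg. 4 presented later, and the driver sketch given after the two phases below reuses it:
function [parts, s] = process_interval(w, c, s, eps)
  % refine the interval w = [aj, bj] around the detected singularity c;
  % parts is a cell array of the surviving subintervals, s counts the splits
  aj = w(1); bj = w(2); parts = {};
  if aj > bj || abs(bj - aj) < eps            % cases 6, 7: incorrect or "empty" interval
    return;                                   % reject it from further processing
  elseif aj <= c && abs(c - aj) < eps         % cases 1+2: c at/near the lower bound
    parts = {[aj + eps, bj]};                 % shrink omega_j from the left
  elseif bj >= c && abs(c - bj) < eps         % cases 3+4: c at/near the upper bound
    parts = {[aj, bj - eps]};                 % shrink omega_j from the right
  else                                        % case 5: interior singularity
    parts = {[aj, c - eps], [c + eps, bj]};   % split omega_j into omega_j1, omega_j2
    s = s + 1;                                % one more split performed
  end
end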
the stack-based approach consists of two phases (initial and recursive): 1however, for some specific types of non-continuous functions this data may be important (functions with isolated points) and cannot be skipped. geoinformatics fce ctu 17(2), 2018 17 t. bayer: efficient plotting the functions with discontinuities algorithm 3 combined sampling with the singularities, the stack implementation. 1: function cssingstack(f, l,aj,bj,s,d,d,ε,α) 2: s = ∅,s = 0 3: s ← ωj = [aj,bj] 4: while s 6= ∅ do 5: ωj = s.pop() 6: try 7: lj = ∅ 8: csinit(f,lj,aj,bj, 1, 1,d,d,ε,α) 9: l ← lj 10: catch (singularityexception e) 11: c = e.x 12: if (s > s) then 13: l = ∅ 14: return 15: else 16: k = 0,ωj,1 = ωj,ωj,2 = ωj 17: processint(ωj,c,ωj,1,ωj,2,k,s,ε) 18: if (k > 0) then 19: s ← ωj,1 20: else if (k > 1) then 21: s ← ω2,1 1. the initial phase initialize the empty stack s = ∅. create ωg = [a,b] and push s ← ωg. 2. the recursive steps repeat the following steps until s is empty: (a) pop the actual good interval ωgj ← s from s and get aj, bj. (b) create the empty list lj = ∅. (c) create the temporary polygonal approximation of f on ωj = [aj,bj] using combined sampling stored in lj. (d) if no discontinuity appears, add lj to l: l ← lj and go to step (a). otherwise, c represents the discontinuity that must be treated in steps e-h). (e) if s > s, the maximum allowed recursion depth is exceeded without a reasonable solution. clear the polygonal approximation l. (f) otherwise, initialize the newly created intervals ωj,1 = [aj,1,bj,1], ωj,2 = [aj,2,bj,2], as ωj,1 = ωj, ωj,2 = ωj, and the amount of created intervals as k = 0. (g) call the function processint with parameters ωj, c, ωj,1, ωj,2, k, s, ε refining the interval ωj; see alg. 4. the subintervals ωj,1, ωj,2 are passing by reference. geoinformatics fce ctu 17(2), 2018 18 t. bayer: efficient plotting the functions with discontinuities (h) if at least one new interval needs to be created, then k > 0. push ωj,1 to the stack: s ← ωj,1. if k > 1, push the second interval ωj,2 to the stack: s ← ωj,2. processing the interval. the procedure processint(ωj,c,ωj,1,ωj,2,k, s,ε) refines the interval ωj. depending on the c value, the ωj bounds are shifted, or ωj is split to ωj,1, ωj,2. it can be summarized as follows: 1. if aj > bj, ωj represents an incorrect interval, return. 2. if |bj −aj| < ε, ωj represents an “empty” interval, return. as mentioned above, for some functions with the multiple jump discontinuities the empty interval cannot be skipped. 3. if aj ≤ c∧|c−aj| < ε, the discontinuity c is close to the lower bound of the interval ωj. shift the lower bound aj,1 = aj + ε of ωj,1 and increase the amount of created intervals k = k + 1. 4. if bj ≥ c∧|c− bj| < ε, the discontinuity c is close to the upper bound of the interval ωj. shift the upper bound bj,1 = bj −ε of ωj,1 and increase the amount of created intervals k = k + 1. 5. if aj < c < bj, then |c−aj| > ε∧|c− bj| > ε, the discontinuity c is inside the interval ωj which needs to be split to ωj,1, ωj,2. shift the upper bound bj,1 = c − ε of ωj,1 and the lower bound aj,2 = c + ε of ωj,2. increase the amount of the created intervals k = k + 2 and splits s = s + 1. 6. if at least one new interval needs to be created, then k > 0. push ωj,1 to the stack: s ← ωj,1. if k > 1, push the second interval ωj,2 to the stack: s ← ωj,2. for the implementation, see alg. 4. analogously, the singularity value c is stored in the thrown exception, and processed in the try-catch block. 
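to make the control flow of alg. 3 concrete, a minimal octave sketch of the stack-based driver follows; instead of exceptions, the assumed helper cs_interval returns the position c of the first detected singularity (or nan when the interval is "good"), and process_interval is the sketch given above:
function L = cs_with_singularities(f, a, b, eps, alpha, smax)
  % combined sampling of f on [a, b] with successive localisation of singularities
  L = {};                                  % list of disjoint polygonal approximations L_j
  stack = {[a, b]};                        % stack of candidate "good" intervals
  s = 0;                                   % number of splits performed so far
  while ~isempty(stack)
    w = stack{end}; stack(end) = [];       % pop the interval omega_j = [aj, bj]
    [Lj, c] = cs_interval(f, w(1), w(2), eps, alpha);   % combined sampling (algs. 1, 2)
    if isnan(c)
      L{end + 1} = Lj;                     % no singularity: keep the polyline
      continue;
    end
    if s > smax                            % maximum amount of splits exceeded
      L = {}; return;                      % no reasonable solution found
    end
    [parts, s] = process_interval(w, c, s, eps);   % shrink or split omega_j
    for k = 1:numel(parts)
      stack{end + 1} = parts{k};           % push the refined subintervals
    end
  end
end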
in general, the stack-based implementation is more stable than a common recursion and does not suffer from too high value of the recursion depth d; especially for the small values of ε. 5.2. utilization in geoinformatics the proposed methods may be used for the polygonal approximation of curves when a more compact and efficient representation is required. the circles, circular arcs, ellipses or offsets of curves (e.g., buffers) cannot be internally stored in the *.shp files. it can also be used for the contour lines simplification (removing the adjacent segments, where αi < α). another utilization for which the algorithm was originally developed, is a more efficient reconstruction of the map projection graticule. the coordinate functions f(ϕ,λ), g(ϕ,λ) of the map projection depend on ϕ,λ and may contain several discontinuities. therefore, the problem needs to be generalized, and its solution must be adapted for these facts. the algorithm for the map projection graticule construction will be presented in the next paper. 6. experiments and results. the proposed methods for combined sampling have been tested for the set of 9 functions; the ability to detect and treat the discontinuities represents an important factor. geoinformatics fce ctu 17(2), 2018 19 t. bayer: efficient plotting the functions with discontinuities algorithm 4 processing the interval: shift bounds or split. 1: function processint(ωj,c,ωj,1,ωj,2,k,s,ε) 2: if (aj > bj) then 3: return 4: if |bj −aj| < ε then 5: return 6: if (aj ≤ c) ∧ (|c−aj| ≤ ε) then 7: aj,1 = aj + ε 8: k = k + 1 9: else if (bj ≥ c) ∧ (|c− bj| ≤ ε) then 10: bj,1 = bj −ε 11: k = k + 1 12: else if (aj < c < bj) ∧ (|c−aj| > ε) ∧ (|c− bj| > ε) then 13: bj,1 = c−ε,aj,2 = c + ε 14: k = k + 2 15: s = s + 1 6.1. list of functions in all cases is ω = [−5π, 5π], ε = 0.001, α = 1◦, the thresholds s, d have not been set. the first function f1(x) = { 1 + x, x ≥ 0, 0, x < 0, has the jump discontinuity at x = 0. the polyline representation contains only 4 points divided into two intervals ωg1 = [−5π,−1.3 · 10−4], ω g 2 = [1.6 · 10−4, 5π], the amount of splits is s = 1, no recursion is required d = 0. the second function f2 (x) = e −500x2, is quite steep and does not contain any discontinuity. these facts led to the polygonal approximation formed by 266 points, ωg = ω, no splits, s = 0, the maximum recursion depth is d = 3. the third function f3 (x) = sin(x)/x, has a removable discontinuity at x = 0, where f(0) = 1. ω is divided into two subintervals ω g 1 = [−5π,−1.3 · 10−4], ω g 2 = [1.6 · 10−4, 5π], the amount of splits is s = 1, no recursion required d = 0, the polygonal approximation contains 122 points. the fourth function f4(x) = x/ sin(x), has jump discontinuities at x = ±π ± kπ. hence, ω is split into 9 sub intervals: ωg1 = [−15.692,−12.579], ωg2 = [−12.554,−9.434], ω g 3 = [−9.415,−6.290], ω g 4 = [−6.277,−3.145], ω g 5 = [−3.138, 3.138], ω g 6 = [3.145, 6.277], ω g 7 = [6.290, 9.415], ω g 8 = [9.434, 12.554], ω g 9 = [12.579, 15.692]. the amount of splits s = 8 as well as the recursion depth d = 6, provide the polygonal approximation containing 984 points. the next function f5(x) = x sin(5/x), geoinformatics fce ctu 17(2), 2018 20 t. 
bayer: efficient plotting the functions with discontinuities -10 0 10 x 0 5 10 15 20 f( x ) f1(x), 4 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f2(x), 266 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f3(x), 122 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f4(x), 984 sampled points -10 0 10 x -5 0 5 10 f( x ) f5(x), 6032 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f6(x), 755 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f7(x), 163 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f8(x), 1948 sampled points -10 0 10 x -10 -5 0 5 10 f( x ) f9(x), 1109 sampled points figure 5.1: the polygonal approximation of functions f1(x), ...,f9(x) created by the combined sampling technique; the discontinuities involved. has a discontinuity at x = 0, and f(0) = 0. ω should be divided in two subintervals, but the algorithm creates 12 subintervals in 42 splits: ωg1 = [−15.708,−0.015], ω g 2 = [−0.009,−0.008], ω g 3 = [−0.007,−0.006], ω g 4 = [−0.005,−0.004], ω g 5 = [−0.004,−0.003], ω g 6 = [−0.003, 0.003], ω g 7 = [0.003, 0.004], ω g 8 = [0.004, 0.005], ω g 9 = [0.005, 0.006], ω g 10 = [0.006, 0.007], ω g 11 = [0.008, 0.009], ωg12 = [0.015, 15.708]. as a result of the recursion depth d = 9, the total amount of points is n = 6032. while the graphical representation is aesthetically pleasing, the polygonal representation contains redundant data; this is due to the false detection of discontinuities provide by lr criterion. the next function f6(x) = 1/(tan 2x tan 0.5x), has the infinite singularities at x = ±1/2π ± kπ, and at x = 0 ± k2π (total 15). the adaptive sampling procedure with the recursion depth d = 7 creates 755 points in 16 new intervals ωg1 = [−15.708,−14.138, ω g 2 = [−14.137,−12.598], ω g 3 = [−12.535,−10.996], ω g 4 = [−10.995 − 7.855], ωg5 = [−7.853,−6.315], ω g 6 = [−6.251,−4.713], ω g 7 = [−4.712,−1.571], ω g 8 = [−1.570,−0.032], ω g 9 = [0.032, 1.570], ω g 10 = [1.571, 4.712], ω g 11 = [4.713, 6.251], ω g 12 = [6.315, 7.853], ωg13 = [7.855, 10.995], ω g 14 = [10.996, 12.535], ω g 15 = [12.598, 14.137], ω g 16 = [14.138, 15.708] with s = 15 splits. the next function f7(x) = e−2x/(x− 1), geoinformatics fce ctu 17(2), 2018 21 t. bayer: efficient plotting the functions with discontinuities algorithm 5 plotting the function f9(x) involving discontinuities in the octave v. 4.4 scripting language. xmin = -5*pi; xmax = -xmin; eps = 0.001; lrmax = 0.8; x = xmin:0.001:xmax; f = @(x)exp(x)./tan(x); y = f(x); d = lr( f, x, eps) > lrmax; x(d==1) = nan; y(d==1) = nan; plot(x,y,’-r’); xlim([xmin,xmax]) ylim([xmin,xmax]) set(gca,’xlim’,[xmin xmax]) set(gca,’ylim’,[xmin xmax]]) xlabel(’x’); ylabel(’f(x)’); axis equal; has the infinite singularity at x = 1, ω is divided into two subintervals ωg1 = [−4.286, 1.000], ω g 2 = [1.000, 15.708]. the amount of splits is s = 1 as well as the recursion depth d = 6, provide the polygonal approximation by 163 points. the next function f8(x) = x2 sin 2x/(2x− 1), has the infinity singularity at x = 0.5, ω is divided into two subintervals ωg1 = [−15.708, 0.500], ω g 2 = [0.500, 15.708]. the amount of splits is s = 1 and the recursion depth d = 7 bring the polygonal approximation by 1948 points. the last function f9(x) = ex/ tan x, has infinite singularities at x = ±kπ (total 11). 
the adaptive sampling procedure with the recursion depth d = 6 creates 1109 points in 10 new intervals ωg1 = [−15.708,−12.567], ω g 2 = [−12.566,−9.425], ωg3 = [−9.425,−6.283], ω g 4 = [−6.2837,−3.142], ω g 5 = [−3.141,−0.001], ω g 6 = [0.001, 3.119], ω g 7 = [3.165, 5.925], ω g 8 = [7.224, 8.138], ω g 9 = [10.979, 11.012], ω g 10 = [14.137, 14.138], with s = 10 splits. the polygonal approximations of all analyzed functions can be found in fig. 5.1. it is evident that their courses have been estimated correctly. however, for some functions, the proposed combined sampling algorithm brings less redundant representation (f5). in general, this method cannot compete with the well-known high-end solutions (mathematica) involving the robust numerical techniques, but it may provide a reasonable representation of functions in technical computing. 6.2. comparison with other systems for comparison, the functions will be plotted in two well-known systems. while the opensource software is represented by octave (v. 4.4), the commercial software by wolfram mathematica (v. 11). geoinformatics fce ctu 17(2), 2018 22 t. bayer: efficient plotting the functions with discontinuities x f x x f x x f x x f x x f x x f x x f x x f x x f x f 1 (x) f 2 (x) f 3 (x) f 4 (x) f 5 (x) f 6 (x) f 7 (x) f 8 (x) f 9 (x) figure 6.1: functions f1(x), ...,f9(x) plotted in the wolfram mathematica software, v. 11. octave. unfortunately, the open-source software octave (v. 4.4) does not support a correct plotting the functions involving discontinuities. the main idea of our solution is based on splitting ω to the subintervals ωg by putting nan numbers between ωg, where the function is undefined. the discontinuities are detected by the lr criterion given by eq. 4.1. from a mathematic point of view, more characteristics of the function behavior need to be studied (first and second derivatives, asymptotes, local/global minima, ...), but this is outside the scope of the paper. the script can be found in alg. 5. the previous results set the numerical characteristics: ω = [−5π, 5π], ε = 0.001, lr = 0.8, sampling step h = 0.001. a curve is sampled uniformly by 10000π (approx. 31 000) points; this “relatively large” value has been set empirically. in general, the obtained results are analogous to the proposed method. unfortunately, the algorithm is sensitive to the values of h, and lr. for the larger values of h, h = 0.01, only a subset of functions is sampled geoinformatics fce ctu 17(2), 2018 23 t. bayer: efficient plotting the functions with discontinuities table 2: uniform and combined sampling of the graticules of the equal area cylindrical, conic, azimuthal and werner-staab projections using combined sampling. the quantitative parameters are presented. projection sampling nmer npar αm αp α̃m α̃p equal area cylindrical uniform 1443 1406 0.00 0.00 0.00 0.00combined 195 190 0.00 0.00 0.00 0.00 equal area conic uniform 1443 1406 0.00 2.20 0.00 2.20combined 195 1612 0.00 5.01 0.00 1.89 equal area azimuthal uniform 1443 1406 0.00 5.00 0.00 5.00combined 195 2476 0.00 4.74 0.00 2.81 werner-staab uniform 1443 1406 9.27 5.00 3.86 2.93combined 2334 1747 4.99 5.00 2.36 2.34 correctly. mathematica. wolfram mathematica v. 11, the well-known system for technical computing, developed for three decades, supports automatic plotting the functions with the discontinuities. the script has a straightforward form; see alg. 6. for the functions f1(x), ...,f8(x) the analogous discontinuities have been detected. 
unfortunately, for f9, the asymptote x = −3π has not been recognized; see fig. 6.1. moreover, the asymptote x = −2π is hard to distinguish. if we understand mathematica as the reference software, our proposed algorithm may be found as a simple and efficient tool for plotting a general function of the one variable. however, many situations may appear when it fails. algorithm 6 plotting the function f9(x) involving the discontinuities in the mathematica v. 11 scripting language. xmin = -5*pi; xmax = -xmin; f[x_] := exp[x]/tan[x]}; plot[f[x], x, xmin, xmax, plotrange -> xmin, xmax, xmin, xmax, axeslabel -> x, f[x], axesorigin -> 0, 0, aspectratio -> automatic, plotstyle -> red]; 6.3. construction of the map projection graticule in the last test, where the graticules of several map projections are reconstructed, the uniform and adaptive sampling techniques are compared regarding the data representation compactness. it is measured by the amount of the sampled meridian points nmer, and parallel points npar. maximum angles αm,αp between the sampled meridian and parallel segments together with their mean values α̃m, α̃p are measured. the graticule is constructed over the entire planisphere, so ω = ωϕ×ωλ, where ϕ ∈ [−π/2,π/2], and λ ∈ [−π,π], the offsets between the meridians and parallels are ∆ϕ = ∆λ = 10◦. in uniform sampling, the sampling steps of the meridians and parallels are δϕ = δλ = 2◦; combined sampling uses α = 2◦. for the circular arcs, uniform and combined sampling provide the analogous density of points representing the polygonal approximation. on the contrary, uniform sampling brings a higher density of the sampled points for the straight lines and vice versa for most of the curves. four map geoinformatics fce ctu 17(2), 2018 24 t. bayer: efficient plotting the functions with discontinuities -18 0.0 -17 0.0 -16 0.0 -15 0.0 -14 0.0 -13 0.0 -12 0.0 -11 0.0 -10 0.0 -90 .0 -80 .0 -70 .0 -60 .0 -50 .0 -40 .0 -30 .0 -20 .0 -10 .0 0.0 10. 0 20. 0 30. 0 40. 0 50. 0 60. 0 70. 0 80. 0 90. 0 100 .0 110 .0 120 .0 130 .0 140 .0 150 .0 160 .0 170 .0 180 .0 -0.0 0.0 -90.0 -90.0 -80.0 -80.0 -70.0 -70.0 -60.0 -60.0 -50.0 -50.0 -40.0 -40.0 -30.0 -30.0 -20.0 -20.0 -10.0 -10.0 0.0 0.0 10.0 10.0 20.0 20.0 30.0 30.0 40.0 40.0 50.0 50.0 60.0 60.0 70.0 70.0 80.0 80.0 90.0 90.0 -180.0 -170.0 -160.0 -150.0 -140.0 -130.0 -120.0 -110.0 -100. 0 -90.0 -80.0 -70.0 -60. 0 -50. 0 -40. 0 -30. 0 -20. 0 -10. 0 0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0 100.0 110.0 120.0 130.0 140.0 150.0 160.0 170.0 180.0 -0.0 0.0 -90.0 -90.0 -80.0 -80.0 -70.0 -70.0 -60.0 -60.0 -50.0 -50 .0 -40.0 -40.0 -30.0 -30.0 -20.0 -20.0 -10.0 -10.0 0.0 0.0 10.0 10.0 20.0 20.0 30.0 30.0 40.0 40.0 50.0 50.0 60.0 60.0 70.0 70.0 80.0 80.0 90.0 90.0 -180.0 -170.0 -160.0 -150.0 -140.0 -130.0 -120.0 -110.0 -100.0 -90.0 -80.0 -70.0 -60.0 -50. 0 -40 .0 -30 .0 -20 .0 -10 .0 0.0 10 .0 20 .0 30. 0 40.0 50.0 60.0 70.0 80.0 90.0 100.0 110.0 120.0 130.0 140.0 150.0 160.0 170.0 180.0 -0. 0 0.0 -90.0 -90 .0 -80.0 -80 .0 -70.0 -70 .0 -60.0 -60 .0 -50.0 -50 .0 -40.0 -40 .0 -30.0 -30 .0 -20.0 -20 .0 -10.0 -10 .0 0.0 0.0 10.0 10 .0 20.0 20 .0 30.0 30 .0 40.0 40 .0 50.0 50 .0 60.0 60 .0 70.0 70 .0 80.0 80 .0 90.0 90 .0 -180.0 -170.0 -160 .0 -150 .0 -140 .0 -13 0.0 -12 0.0 -11 0.0 -10 0.0 -90 .0 -80 .0 -70 .0 -60 .0 -50 .0 -40 .0 -30 .0 -20 .0 -10 .0 0.0 10 .0 20 .0 30 .0 40 .0 50 .0 60 .0 70 .0 80 .0 90 .0 10 0.0 110 .0 120 .0 130 .0 140 .0 150.0 160. 0 170.0 180.0 -0. 
0 0.0 -90.0-90.0 -80.0 -80.0 -70.0 -70.0 -60.0 -60.0 -50.0 -50.0 -40.0 -40.0 -30.0 -30.0 -20.0 -20 .0 -10.0 -10. 0 0.0 0.0 10.0 10. 0 20.0 20 .0 30.0 30 .0 40.0 40 .0 50.0 50 .0 60.0 60 .0 70.0 70 .0 80.0 80 .0 90.0 90 .0 figure 6.2: the reconstructed graticules of the equal area cylindrical, conic, azimuthal and werner-staab projections using the combined sampling technique. projections are involved in testing: equal-area cylindrical, conic (ϕ1 = 45◦), azimuthal, and werner-staab, their coordinate functions are continuous on ω. the results are summarized in tab. 2. for the straight segments, uniform sampling provides the redundant data; this issue refers to the cylindrical projection as well as to the meridians of the conic and azimuthal projections. while the constant curvature leads to the similar results (parallels of conic, azimuthal and werner-staab projections), in the higher-curvature regions, the combined sampling provides a smoother approximation (meridians of werner-staab projection). in general, combined sampling preserves the curvature better (see αm,αp, α̃m, α̃p values) and brings less redundant data which requires more sampled points. the reconstructed graticules can be found in fig. 6.2. the modified version of the algorithm treating the discontinuities in the coordinate functions f,g will be presented in the next paper. 7. conclusion this article presented a new algorithm combining the uniform and adaptive sampling techniques applicable to the functions involving the discontinuities, which are detected by the lr criterion. it can be used for the polygonal approximation of the curves when a more compact, efficient and less redundant representation is required. a typical example is represented by the circles, circular arcs, ellipses or offsets of curves, but it can be easily applicable to the map geoinformatics fce ctu 17(2), 2018 25 t. bayer: efficient plotting the functions with discontinuities projection graticule and similar problems in gis and cartography. the illustrating examples indicate that the functions involving the discontinuities widely applied in technical practice can be plotted efficiently. however, there are many more complex functions, where the proposed solutions are not sufficient and need to be refined. a typical situation is represented by the isolated point, which is currently thrown. another benefit is the relatively simple stack-based implementation. the source code written in java can be found in the github repository https://github.com/bayertom/sampling. the next paper brings an extension of this algorithm for combined sampling of the map projection graticules, when the coordinate functions f,g involve the discontinuities. references [1] francesc arandiga et al. “interpolation and approximation of piecewise smooth functions”. in: siam journal on numerical analysis 43.1 (2005), pp. 41–57. doi: 10.1137/ s0036142903426245. [2] rick archibald, anne gelb, and jungho yoon. “determining the locations and discontinuities in the derivatives of functions”. in: applied numerical mathematics 58.5 (2008), pp. 577–592. [3] andré ricardo backes and odemir martinez bruno. “polygonal approximation of digital planar curves through vertex betweenness”. in: information sciences 222 (2013), pp. 795–804. [4] nana s. banerjee, james f. geer, and langley research center. exponential approximations using fourier series partial sums [microform] / nana s. banerjee and james f. geer. english. 
national aeronautics and space administration, langley research center ; national technical information service, distributor hampton, va. : [springfield, va, 1997, p. 1 v. [5] robert bruce bauer. “band pass filters for determining shock locations”. in: (). [6] frédéric benhamou and william j. older. “applying interval arithmetic to real, integer, and boolean constraints”. in: the journal of logic programming 32.1 (1997), pp. 1–24. issn: 0743-1066. doi: https://doi.org/10.1016/s0743-1066(96)00142-2. [7] p. binev et al. “adaptive approximation of curves”. in: approximation theory (2004), pp. 43–57. [8] mira bozzini and milvia rossini. “the detection and recovery of discontinuity curves from scattered data”. in: journal of computational and applied mathematics 240.supplement c (2013). mata 2012, pp. 148–162. issn: 0377-0427. doi: https://doi.org/ 10.1016/j.cam.2012.06.014. [9] a carmona-poyato et al. “polygonal approximation of digital planar curves through break point suppression”. in: pattern recognition 43.1 (2010), pp. 14–25. [10] filipe de carvalho nascimento et al. “approximating implicit curves on plane and surface triangulations with affine arithmetic”. in: computers & graphics 40 (2014), pp. 36–48. geoinformatics fce ctu 17(2), 2018 26 https://github.com/bayertom/sampling https://doi.org/10.1137/s0036142903426245 https://doi.org/10.1137/s0036142903426245 https://doi.org/https://doi.org/10.1016/s0743-1066(96)00142-2 https://doi.org/https://doi.org/10.1016/j.cam.2012.06.014 https://doi.org/https://doi.org/10.1016/j.cam.2012.06.014 t. bayer: efficient plotting the functions with discontinuities [11] dennis cates and anne gelb. “detecting derivative discontinuity locations in piecewise continuous functions from fourier spectral data”. in: numerical algorithms 46.1 (2007), pp. 59–84. [12] knut s eckhoff. “accurate reconstructions of functions of finite regularity from truncated fourier series expansions”. in: mathematics of computation 64.210 (1995), pp. 671– 690. [13] m.h. van emden. “value constraints in the clp scheme”. in: constraints 2.2 (oct. 1997), pp. 163–183. issn: 1572-9354. doi: 10.1023/a:1009705709733. [14] richard fateman. “honest plotting, global extrema, and interval arithmetic”. in: papers from the international symposium on symbolic and algebraic computation. issac ’92. berkeley, california, usa: acm, 1992, pp. 216–223. isbn: 0-89791-489-9. doi: 10.1145/143242.143314. [15] luiz henrique de figueiredo. “adaptive sampling of parametric curves”. in: graphics gems v 5 (1995), pp. 173–178. [16] anne gelb and eitan tadmor. “adaptive edge detectors for piecewise smooth data based on the minmod limiter”. in: journal of scientific computing 28.2 (2006), pp. 279–306. [17] anne gelb and eitan tadmor. “spectral reconstruction of piecewise smooth functions from their discrete data”. in: esaim: mathematical modelling and numerical analysis 36.2 (2002), pp. 155–175. [18] stuart geman and donald geman. “stochastic relaxation, gibbs distributions, and the bayesian restoration of images”. in: ieee transactions on pattern analysis and machine intelligence 6 (1984), pp. 721–741. [19] tim gutzmer and armin iske. “detection of discontinuities in scattered data approximation”. in: numerical algorithms 16.2 (1997), pp. 155–170. [20] timothy j. hickey, zhe qju, and maarten h. van emden. “interval constraint plotting for interactive visual exploration of implicitly defined relations”. in: reliable computing 6.1 (feb. 2000), pp. 81–92. issn: 1573-1340. doi: 10.1023/a:1009950630139. [21] yifan hu. 
“efficient, high-quality force-directed graph drawing”. in: mathematica journal 10.1 (2005), pp. 37–71. [22] guang-shan jiang and chi-wang shu. “efficient implementation of weighted eno schemes”. in: journal of computational physics 126.1 (1996), pp. 202–228. [23] george kvernadze. “determination of the jumps of a bounded function by its fourier series”. in: journal of approximation theory 92.2 (1998), pp. 167–190. [24] david lee. “detection, classification, and measurement of discontinuities”. in: siam journal on scientific and statistical computing 12.2 (1991), pp. 311–341. doi: 10 . 1137/0912018. [25] david lee and grzegorz w wasilkowski. “discontinuity detection and thresholding-a stochastic approach”. in: journal of complexity 9.1 (1993), pp. 76–96. [26] licia lenarduzzi and robert schaback. “kernel-based adaptive approximation of functions with discontinuities”. in: applied mathematics and computation 307.supplement c (2017), pp. 113–123. issn: 0096-3003. doi: https://doi.org/10.1016/j.amc.2017. 02.043. geoinformatics fce ctu 17(2), 2018 27 https://doi.org/10.1023/a:1009705709733 https://doi.org/10.1145/143242.143314 https://doi.org/10.1023/a:1009950630139 https://doi.org/10.1137/0912018 https://doi.org/10.1137/0912018 https://doi.org/https://doi.org/10.1016/j.amc.2017.02.043 https://doi.org/https://doi.org/10.1016/j.amc.2017.02.043 t. bayer: efficient plotting the functions with discontinuities [27] hélio lopes, joão batista oliveira, and luiz henrique de figueiredo. “robust adaptive polygonal approximation of implicit curves”. in: computers & graphics 26.6 (2002), pp. 841–852. [28] jose marroquin, sanjoy mitter, and tomaso poggio. “probabilistic solution of ill-posed problems in computational vision”. in: journal of the american statistical association 82.397 (1987), pp. 76–89. [29] byron nakos and v miropoulos. “local length ratio as a measure of critical points detection for line simplification”. in: fifth workshop on progress in automated map generalization, paris, france. 2003. [30] m oliveria et al. “universal high order subroutine with new shock detector for shock boundary layer interaction”. in: in other words 10 (2009), pp. 1–2. [31] ricardo pachón, rodrigo b platte, and lloyd n trefethen. “piecewise-smooth chebfuns”. in: ima journal of numerical analysis 30.4 (2009), pp. 898–916. [32] afonso paiva et al. “approximating implicit curves on triangulations with affine arithmetic”. in: graphics, patterns and images (sibgrapi), 2012 25th sibgrapi conference on. ieee. 2012, pp. 94–101. [33] mohammad tanvir parvez and sabri a mahmoud. “polygonal approximation of digital planar curves through adaptive optimizations”. in: pattern recognition letters 31.13 (2010), pp. 1997–2005. [34] leszek plaskota and grzegorz w wasilkowski. “adaption allows efficient integration of functions with unknown singularities”. in: numerische mathematik 102.1 (2005), pp. 123–144. [35] m. rossini. “2d-discontinuity detection from scattered data”. in: computing 61.3 (sept. 1998), pp. 215–234. issn: 1436-5057. doi: 10.1007/bf02684351. [36] yiqing shen and gecheng zha. “improvement of the weno scheme smoothness estimator”. in: international journal for numerical methods in fluids 64.6 (2010), pp. 653– 675. issn: 1097-0363. doi: 10.1002/fld.2168. [37] moshe shpitalni, yoram koren, and cc lo. “realtime curve interpolators”. in: computeraided design 26.11 (1994), pp. 832–838. [38] kaleem siddiqi, benjamin b kimia, and chi-wang shu. 
“geometric shock-capturing eno schemes for subpixel interpolation, computation, and curve evolution”. in: computer vision, 1995. proceedings., international symposium on. ieee. 1995, pp. 437– 442. [39] c. smith and n. blachman. the mathematica graphics guidebook. addison-wesley, 1995. isbn: 9780201826555. url: https : / / books . google . cz / books ? id = l % 5c _ myaqaaiaaj. [40] jeffrey allen tupper. graphing equations with generalized interval arithmetic. university of toronto, 1996. [41] leland wilkinson. “algorithms for choosing the domain and range when plotting a function”. in: (1991), pp. 1–8. geoinformatics fce ctu 17(2), 2018 28 https://doi.org/10.1007/bf02684351 https://doi.org/10.1002/fld.2168 https://books.google.cz/books?id=l%5c_myaqaaiaaj https://books.google.cz/books?id=l%5c_myaqaaiaaj t. bayer: efficient plotting the functions with discontinuities [42] daniel c.h. yang and tom kong. “parametric interpolator versus linear interpolator for precision cnc machining”. in: computer-aided design 26.3 (1994). special issue:nc machining and cutter-path generation, pp. 225–234. issn: 0010-4485. doi: https : / / doi.org/10.1016/0010-4485(94)90045-0. [43] s-s yeh and p-l hsu. “the speed-controlled interpolator for machining parametric curves”. in: computer-aided design 31.5 (1999), pp. 349–357. [44] syh-shiuh yeh and pau-lo hsu. “adaptive-feedrate interpolation for parametric curves with a confined chord error”. in: computer-aided design 34.3 (2002), pp. 229–237. [45] tony chan zhou and hm zhou. “adaptive eno-wavelet transforms for discontinuous functions”. in: cam report, no. 99-21, dept. of math., ucla, submit to siam numer. anal. citeseer. 1999. geoinformatics fce ctu 17(2), 2018 29 https://doi.org/https://doi.org/10.1016/0010-4485(94)90045-0 https://doi.org/https://doi.org/10.1016/0010-4485(94)90045-0 geoinformatics fce ctu 17(2), 2018 30 t. bayer: efficient plotting the functions with discontinuities introduction related work combined sampling polygonal approximation of the curve combined sampling technique singularity detection combined sampling with the singularities combined sampling algorithm involving the singularities utilization in geoinformatics experiments and results. list of functions comparison with other systems construction of the map projection graticule conclusion