Welcome to Studies in Digital Heritage

Welcome to the first issue of Studies in Digital Heritage (SDH)! SDH offers professionals active in the field of digital heritage the opportunity to publish their work, at no cost, in an online, peer-reviewed, open-access journal with three issues per year. Topics appropriate for the journal cover the entire workflow of cultural heritage studies, from the discovery and documentation of monuments to analysis, interpretation, and public outreach. Articles should highlight the role of digital technology in facilitating cultural heritage research and applications. SDH is especially eager to publish work that is innovative and creative in one of two ways: articles whose importance depends on the value of the cultural object studied, and articles presenting innovations in digital technologies. For example, an article presenting a new insight or discovery about a key monument such as the Temple of Zeus at Olympia would be appropriate for the journal, as long as that insight arose from the application of digital technology. Equally of interest to SDH are articles about purely technical advances of direct application in one of the fields of cultural heritage.
In addition to articles, SDH also publishes mediated blogs; reviews of books, software, and hardware; and review articles summarizing the state of the technology or art regarding any digital heritage topic or discussing the advantages and disadvantages of different approaches to a given task. The content of blogs can vary widely, including, e.g., a comment (whether critical or constructive) about an article we have published, the announcement of an upcoming conference with a related call for papers, etc. Articles may be as long as 10,000 words or, with special permission, even longer. In addition to text and images, SDH supports the following embedded media: audio, video, and interactive 3D models using a WebGL solution such as 3DHOP, Sketchfab, or Unity.

Once an article has been submitted, it is assigned to an editor, who, in turn, recruits a minimum of two and, ideally, three referees. Identifying suitable readers can sometimes take a month or more. Referees are generally given one month to write their reports. Our goal is to act on submissions within two or, at most, three months of submission. As with most journals, submissions can be accepted, accepted subject to revision, or rejected. If an article is accepted subject to revision, the original referee requesting changes is consulted before the new version is accepted and published.

SDH encourages authors to team up to propose special issues for the consideration of our editorial board. We also encourage authors to volunteer to serve as reviewers and members of the editorial board. Articles are published in two formats: online and as a downloadable PDF. Editors of special issues can arrange for printed versions to be generated through the print-on-demand service of the Indiana University Press. As a publication of the Indiana University Press (IU Press), SDH offers authors free professional services including copy-editing and help with layout and design.
IU Press also handles advertising, publicity, and submission of the applications for obtaining an impact factor. In short, SDH is here to serve the needs of the international community of digital heritage professionals and to do so with open access, no article processing charge (APC), and no sacrifice in standards with respect to style, layout, and scientific substance.

Sincerely yours,

Bernard Frischer, Indiana University, Co-Editor-in-Chief
Gabriele Guidi, Politecnico di Milano, Co-Editor-in-Chief

and the following members of the Editorial Board:

Willem Beex, BEEX, Netherlands
Wolfgang Börner, Museen der Stadt Wien Stadtarchäologie
Jane W. Crawford, University of Virginia, USA
Nicolò Dell'Unto, Lunds Universitet, Lund, Sweden
Livio De Luca, Centre National de la Recherche Scientifique, Marseille, France
John Fillwalk, IDIA Lab, Ball State University, United States
Philippe Fleury, University of Caen Normandy, France
Irmela Herzog, Rhineland Commission for Archaeological Monuments and Sites, Bonn, Germany
Paolo Liverani, University of Florence, Italy
Giulio Magli, Politecnico di Milano, Italy
András Patay-Horváth, Eötvös Loránd University (ELTE), Hungary
Laia Pujol-Tost, Pompeu Fabra University, Spain
Guillaume Robin, University of Edinburgh, United Kingdom
Maria Roussou, Assistant Professor, National and Kapodistrian University of Athens
Apostolos Sarris, Foundation for Research and Technology, Hellas (FORTH), Greece
Rebeka Vital, Shenkar. Design. Engineering. Arts, Israel
Georg Zotti, Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology, Austria

Information Integration in a Mining Landscape

Gerald Hiebel, Klaus Hanke, Gert Goldenberg, Caroline O. Grutsch, Markus Staudt
University of Innsbruck, Austria

The integration of information sources is a fundamental step to advance research and knowledge about the ancient mining landscape of Schwaz/Brixlegg in Tyrol, Austria.
The approach is applied to the location, identification, and interpretation of mining structures within the area. Our goal is to illustrate the use of the CIDOC CRM ontology with extensions, in combination with a thesaurus, to integrate data on a conceptual level. To implement this integration, we applied Semantic Web technologies to create a knowledge graph in RDF (Resource Description Framework) that currently represents the available information of seven different sources in a network structure. More sources will eventually be integrated using the same methodology. These include geochemical analyses of artifacts, onomastic research on names related to mining, and archaeological information on other mining areas, in order to research the spread of prehistoric mining activities and technologies. The RDF network can be queried for research, cultural, or emergency-response questions, and the results can be displayed using geoinformation systems. An example of an archaeological research question is the location of mining, settlement, and burial sites in the Bronze Age, differentiating between ore extraction, ore processing, and smelting activities. For emergency services, the names and exact locations of mines are essential in case of an accident within an old mine. Different questions require different subsets of the created knowledge graph. The results of queries to retrieve specific information can be visualized using appropriate tools.

Key words: Information integration, mining archaeology, ontology, semantic technologies, geoinformation.

SDH reference: Gerald Hiebel et al. 2017. Information Integration in a Mining Landscape. SDH, 1, 2, 8 pages. DOI: 10.14434/sdh.v1i2.23231

1. Information Sources

The HiMAT research center of the University of Innsbruck (http://himat.uibk.ac.at) investigates the mining history of the Eastern Alps from prehistory to modern times.
Various projects of the research center in the area of Schwaz/Brixlegg target the location, identification, and interpretation of mining structures. Geological prospections are a fundamental source of information about structures originating from mining activities. Herwig Pirkl [Pirkl 1961] thoroughly investigated the Schwaz/Brixlegg mining area. The result was a publication describing the geologic and surface structures of the area and containing three geological maps at the scale 1:10,000. Two of these maps have been digitized in the course of the work done in the HiMAT research center. Structures identified by Pirkl as underground mining and surface mining have been registered together with their names and coordinates. In addition, information on mining structures provided by the Geological Survey of Austria [GBA 2014] has been integrated (Figure 1).

Figure 1. Mining structures identified by Pirkl and the Geological Survey of Austria (source: GBA, 2014).

To better locate structures, the high-resolution elevation model of the province of Tyrol was examined for concave and convex surface structures that are in proximity to the structures identified by Pirkl (Figure 2). Information about the archaeological sites has been extracted from archaeological literature and from project reports of archaeological prospections and excavations conducted by the HiMAT research center (Figure 3). The most recent project, "Prehistoric copper production in the eastern and central Alps," contributed significantly to our knowledge of the archaeological sites related to prehistoric mining activities.

Author's address: Gerald Hiebel, Unit for Surveying and Geoinformation, University of Innsbruck, Technikerstrasse 13, 6020 Innsbruck, Austria; email: gerald.hiebel@uibk.ac.at. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal.
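The article does not describe the algorithm used to find concave and convex structures in the elevation model. One common approach is to threshold a discrete Laplacian of the elevation grid: cells lying well below their neighbors (possible pits or shaft collapses) come out strongly positive, cells rising above them (possible spoil heaps) strongly negative. The sketch below only illustrates that idea on a synthetic grid; the function name, threshold, and test terrain are our own assumptions, not part of the HiMAT workflow.

```python
import numpy as np

def classify_surface(dem, threshold=5.0):
    """Flag concave (pit-like) and convex (mound-like) cells in an
    elevation grid using a 4-neighbor discrete Laplacian."""
    lap = (np.roll(dem, 1, 0) + np.roll(dem, -1, 0) +
           np.roll(dem, 1, 1) + np.roll(dem, -1, 1) - 4 * dem)
    concave = lap > threshold    # cell sits well below its neighbors
    convex = lap < -threshold    # cell rises well above its neighbors
    return concave, convex

# Synthetic 9x9 terrain: a flat surface with one pit and one mound.
dem = np.zeros((9, 9))
dem[2, 2] = -3.0   # pit (e.g., a collapsed mine entrance)
dem[6, 6] = 3.0    # mound (e.g., a spoil heap)

concave, convex = classify_surface(dem)
print(np.argwhere(concave))  # [[2 2]]
print(np.argwhere(convex))   # [[6 6]]
```

In a real workflow the threshold would be tuned to the DEM's resolution and noise level, and the flagged cells would then be intersected with the structures registered by Pirkl.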
To document the research in the area, we used the HiMAT database [Hiebel et al. 2013], which records the research activities conducted in the years from 2007 to 2011.

Figure 2. Surface structures identified in the high-resolution elevation model of the province of Tyrol (source: Land Tirol 2009).

Figure 3. Archaeological sites in the area of Schwaz/Brixlegg (source: HiMAT 2016).

2. Information Integration

To integrate the heterogeneous information described in the previous section, we first needed a conceptual model able to represent the concepts coming from the different HiMAT research domains, such as geology, surveying, archaeology, linguistics, or metallurgy. The CIDOC CRM ontology [Le Boeuf et al. 2016] was chosen because it is an event-centric data model, and we identified past mining activities and contemporary research activities (which are subclasses of events) as the essential nodes that relate research objects to the documentation and hypotheses created in archaeological research (Figure 4). Extensions of the CIDOC CRM [CIDOC CRM 2016] were used to model observations (CRMsci), interpretations (CRMinf), geometric information (CRMgeo), and digital provenance (CRMdig). The classes of the model had to be refined with a thesaurus (Figure 5) in order to represent the detailed information in the available documentation and to answer research questions relevant to the domain. The integration of vocabularies originating from different sources has been a serious challenge [Doerr 2006]. Within the DARIAH infrastructure (http://www.dariah.eu), an approach was developed to integrate terms within a backbone thesaurus and thus create the ability to query upper levels without the need to reach consensus on lower-level terms, which is often an almost impossible task to accomplish [DARIAH EU 2016].

Figure 4. Conceptualizations used for the approach that are represented with CIDOC CRM classes.
We used Karma [ISI 2016], a tool of the Semantic Web community, to map the information sources to the data model. Figure 6 shows how the original data of the documentation, which must be provided in a structured format (either tabular or hierarchical), was mapped to the formal definitions of the CIDOC CRM ontology. A knowledge graph was created to represent the information, which can be exported in RDF (Resource Description Framework), a data format that is able to relate logical statements within a network [W3C 2014]. RDF is the foundation of the Linked Open Data (LOD) cloud, where data sets are linked to each other on a global level (http://www.linkedopendata.org). In the continuation of the project, we plan to link the created resources to datasets of the LOD cloud such as GeoNames and Wikidata (the human- and machine-readable representation of Wikipedia). The thesaurus was created with the Karma tool as well and represented in SKOS (Simple Knowledge Organization System), a data model of the Semantic Web community for sharing and linking knowledge organization systems such as thesauri, taxonomies, classification schemes, and subject heading systems [W3C 2009].

Figure 5. Thesaurus examples to refine the conceptualizations.

Figure 6. Using Karma to map structured data to the formal definitions of the CIDOC CRM.

After mapping the different information sources and the thesaurus to the common data model, the created RDF structure is placed in a triple store, a database for storing RDF data. In the triple store, the linking of resources (single information source elements, such as a specific underground mine, or a concept, like the Early Bronze Age) takes place, realizing the actual integration. Resources are linked either on a class level (because they belong to the same CIDOC CRM class, e.g., observation), on the SKOS concept level (because the same thesaurus term was attributed to them, e.g., "Early Bronze Age"), or on an individual level (because they describe the same material structure, object, or observation, e.g., "Barbarastollen"). Linking on an individual level is also known as coreference or entity matching and may involve additional processes to assess the identity of individuals if no common identifier is available in the different data sources, which is often the case.

3. Information Retrieval Examples

The RDF network of the triple store can be queried using the SPARQL query language [W3C 2013]. To test the integration, we used a model archaeological research question concerning the location of mining, settlement, and burial sites in the Bronze Age, differentiating between ore extraction, ore processing, and smelting activities. The results of the query were loaded into a geoinformation system, and a map of known Bronze Age sites related to mining, settlement, and burial activities was created (Figure 7). For emergency services, names and exact locations of mines are essential in case of an accident within an old mine. A list of mines containing this information was created from the triple store and given to the emergency services. Figure 8 shows the mines and their names on a map. These two application scenarios show how, for different questions, a different subset of the created knowledge graph is of interest, and that the relevant information can be retrieved and, if necessary, visualized using appropriate tools.

Figure 7. Map of known Bronze Age sites related to mining, settlement, and burial activities.

Figure 8. Names and locations of mines retrieved for emergency services.

4. Conclusion and Outlook

An approach to integrate information available within a mining landscape coming from various sources was developed. A common data model, together with tools and specifications of the Semantic Web community, was used to perform the actual integration.
With specific information retrieval examples, it could be shown that the integration process works and that the triple store can be used to answer specific research questions. In the current implementation, seven data sources from geology, surveying, and archaeology are integrated within the HiMAT database. Further research will apply the methodology to more sources. In the near future, museum exhibition data, a linguistic dissertation about toponyms related to mining, and geochemical metal analyses of prehistoric artifacts are targeted. The integrated data will be used to identify material structures or places in LOD sources like GeoNames or Wikidata through string-matching processes and semantic queries.

5. Acknowledgements

The research presented here was financed by the Austrian Science Fund in the course of an Erwin Schrödinger scholarship (J 3646-N15) and by the University of Innsbruck in the course of a graduate scholarship (240346).

6. References

CIDOC CRM. 2016. CIDOC CRM Compatible Models & Collaborations. http://www.cidoc-crm.org/collaborations (9.1.2017)

DARIAH EU. 2016. DARIAH Backbone Thesaurus (BBT): Definition of a model for sustainable interoperable thesauri maintenance. Thesaurus Maintenance Working Group, VCC3, DARIAH EU. http://83.212.168.219/dariahcrete/sites/default/files/dariah_bbt_v_1.2_draft_v4.pdf (4.1.2017)

M. Doerr. 2006. Semantic problems of thesaurus mapping. Journal of Digital Information 1 (8). Retrieved from https://journals.tdl.org/jodi/index.php/jodi/article/view/31/32 (3.11.2016)

GBA. 2014. Digitale Datensätze des Bergbau-/Haldenkatasters betreffend ausgewählter Bergbaugebiete im Raum Schwaz-Brixlegg und Kitzbühel-Jochberg. Fachabteilung Rohstoffgeologie der Geologischen Bundesanstalt.

G. Hiebel, K. Hanke, and I. Hayek. 2013. Methodology for CIDOC CRM based data integration with spatial data. In F. Contreras, M. Farjas, and F. J. Melero, eds. CAA 2010: Fusion of Cultures. Proceedings of the 38th Annual Conference on Computer Applications and Quantitative Methods in Archaeology, Granada, Spain, April 2010. UK, BAR International. 547-554.

ISI. 2016. Karma: A Data Integration Tool. http://www.isi.edu/integration/karma/ (3.11.2016)

P. Le Boeuf, M. Doerr, C. E. Ore, and S. Stead. 2016. Definition of the CIDOC Conceptual Reference Model. http://www.cidoc-crm.org/official_release_cidoc.html (6.4.2016)

H. Pirkl. 1961. Geologie des Trias-Streifens und des Schwazer Dolomits südlich des Inn zwischen Schwaz und Wörgl (Tirol). Jahrbuch Geol. B. A., Bd. 104, 1. Heft. Wien, 1961.

W3C. 2009. SKOS Simple Knowledge Organization System Reference. https://www.w3.org/TR/2009/REC-skos-reference-20090818/ (19.6.2016)

W3C. 2013. SPARQL 1.1 Overview. https://www.w3.org/TR/sparql11-overview/ (9.1.2017)

W3C. 2014. Resource Description Framework (RDF). http://www.w3.org/RDF/ (19.6.2016)

Maxentius 3D Project

Lucia Marsicano, Saverio Giulio Malatesta, Francesco Lella, Emanuela D'Ignazio, Eleonora Massacci, and Simone Onofri
Sapienza University, Italy

The aim of the project is to propose a full 3D model of the Circus of Maxentius in Rome, encompassing all the aspects of the environment as well as the architectural system. The circus is part of a complex built by Maxentius at the beginning of the fourth century AD. The Maxentian complex is situated on the Via Appia between the second and the third mile; today this area is part of the Parco Regionale dell'Appia Antica, where the need to preserve the ecosystem makes it impossible to remove the vegetation. For this reason a large part of the circus is covered by vegetation, making the survey of the entire structure impossible for researchers. The starting point was a study of the archaeological data, and then the team carried out a targeted field survey to integrate the published data with new information useful for creating a metrically correct reconstruction of the monument.
To model both the landscape and the architectural structures, Blender, an open-source software package, was utilized; to model the statues, ZBrush, a proprietary software package, was used. Each element was modeled using scientific evidence or, lacking that, by employing analogies. The result is a metrically and scientifically correct 3D model of the Circus of Maxentius, useful for studying the monument from a new point of view. By integrating archaeological data and using 3D graphics, it was possible to verify the reconstruction hypothesis of the monument.

Key words: Archaeology, 3D reconstruction, Blender, ZBrush.

SDH reference: Lucia Marsicano et al. 2017. Maxentius 3D Project. SDH, 1, 2, 14 pages. DOI: 10.14434/sdh.v1i2.23199

1. Introduction

The Maxentian complex, composed of the palace, the Mausoleum of Romulus, and the Circus of Maxentius, is situated along the Via Appia between the second and the third milestones. The area is part of the Regional Park of the Appia Antica, comprising 3,500 hectares. The park includes eleven miles of the Via Appia, the Caffarella Valley (200 ha), the archaeological area of the ancient Via Latina, the Aqueducts Park (240 ha), the Tormarancia estate (220 ha), and the Farnesina estate (180 ha). The main purpose of the park is to preserve the ecosystem, protecting its flora and fauna. For this reason, the structure of the circus is covered by dense vegetation, and it is not possible to see the monument in its entirety. The state of preservation of the circus, its historical importance, and questions about the arrangement of the monument are the reasons why our team decided to make a 3D model of the Circus of Maxentius (Fig. 1).

Figure 1. Circus of Maxentius.

We began with the study of the archaeological data, especially cartography, axonometric drawings, and plans. Then the team carried out a limited field survey to collect more data.
Unfortunately, because the vegetation covers such a large part of the monument, it was not possible to significantly supplement the published data. While the plans and axonometric drawings were the starting point for gaining an understanding of the structural features of the monument, historical illustrations were also useful in proposing a likely reconstruction. Also useful were a large number of drawings showing the state of preservation of the monument in the past as well as hypothetical reconstructions. To complete the 3D model of the circus, other details were included. It was possible to insert detailed elements, like the machine used to count the laps, and some statues, thanks to fragments found during the excavations [Luschi 1999]. Furthermore, there are several illustrations from the Roman period, such as mosaics and reliefs, which were useful for adding more details. Once all the data useful for making an accurate reconstruction had been collected, the team went ahead with the 3D modeling using the open-source software Blender. ZBrush, a proprietary software package that allows us to model detailed objects easily and quickly, was used to model the statues (Fig. 2).

Figure 2. Statue of Venus.

2. The Circus of Maxentius

The emperor Maxentius commissioned two major building programs during his brief reign: the first in the center of Rome, along the Via Sacra, and the second just south of the city, along the Via Appia. Since he did not have official endorsement, Maxentius used these works to take advantage of his close connection with the traditional capital of the empire and to align himself with earlier emperors in order to demonstrate the legitimacy of his reign (306-312 AD) [Pisani Sartorio 1999]. It is revealing to focus on Maxentius' decision to construct a personal complex with a residential villa, a circus, and a mausoleum on the Via Appia, three kilometers outside of the city walls.
Maxentius' desire to compete with the tetrarchs and their personal palaces must have impelled him to choose an extramural location so that he could design his own palace free of the spatial and ritual constraints imposed by the older imperial residences on the Palatine Hill. The three Maxentian monuments were strongly interconnected, not only in sharing the same building technique of opus vittatum, but also physically. To integrate the three elements of the complex, passageways connected the precinct of the mausoleum to both the residence on the hill above and the circus that lay in the valley to its east. A long, covered corridor stretched diagonally across the terraced slope to connect the imperial box on the north wall of the circus to the center of the Maxentian palace. Two main factors influenced the composition of the complex: the topography of the area, which determined the location and orientation of the circus, and the pre-existing architectural features. The circus, built in 310-311 AD, is the site's best known feature and is one of the best preserved examples of a Roman hippodrome. The structure is oriented east-west within a natural valley between the hill of the late Republican tomb of Caecilia Metella and the hill where the Maxentian villa is sited. The circus spans a length of 520 meters, and at its widest it measures 92 meters across. The long sides of the track are not parallel, in order to leave the racing chariots just enough space while placing the spectators as close as possible [Ioppolo 1999]. The track is 36.90 meters wide on one leg and 29.60 meters on the other; the radius of curvature of the staircases flanking the triumphal gate is about 30 meters [Ioppolo 1999]. From an architectural point of view, the circus is essentially the dirt track and the central spine, a 296-meter-long narrow masonry construction.
This structure includes at its ends the masonry foundations of the metae, with three marble cones above, the obelisk, and the ten basins of water of the euripus. The obelisk is now situated on top of the Fountain of the Four Rivers, designed by Bernini and located in Piazza Navona. The track was surrounded by steps, or stands, on which the spectators sat to watch the race; the steps are interrupted by the judges' tribunal situated on the southern side. On the northern side is the pulvinar imperatoris, connected to the villa by a passageway. It was possible to enter the circus through three gates: the Porta Pompae, the Porta Triumphalis, and the Porta Libitinensis. The last was used as an exit for horses and riders injured during the race [Ioppolo 1999]. Because of the presence of vegetation, it is not possible to discern the entire shape of the circus. The majority of the functional components are still visible, however, despite their ruinous condition, so it was possible to make some hypotheses and to proceed with the 3D modeling.

3. 3D Modeling

3.1 The Towers

The two towers of the oppidum were modeled one at a time because of the differences between the two objects. Archaeological plans and historical illustrations of this part of the circus were chosen as references; thanks to the archaeological data it was possible to create a metrically correct model of the plan. Once this part of the tower was finished, the object was extruded to produce the elevation, taking into account the archaeological remains, the reconstruction proposed in the historical illustrations, and the studies conducted on the masonry. The northern tower is proposed as a three-story building. The ground floor, covered by a vault, has two gates: the first is the entrance door, the second leads to the carceres, while the second floor has no door but two windows. According to the available data, the model proposes a stairwell on the northern side of the tower [Ioppolo 1999].
The southern tower is, like the northern one, a three-story building, in which every level is covered by a vault. This tower has an entrance at the ground floor and a second one from the terrace on the first floor. Because of the absence of archaeological remains of stairs or steps, the model of the southern tower is proposed without a stairwell. In both cases, the reconstruction shows a terraced roof, even if no archaeological remains can confirm this hypothesis (Fig. 3).

Figure 3. The towers (northern tower on the left; southern tower on the right).

3.2 The Carceres

The archaeological remains of the carceres are too scanty to support a detailed hypothesis of restoration. The surviving remains allow us only to recognize ten pillars that could have supported a series of arches and cross vaults. The resulting spaces were used to hold the chariots before the start of the race. The widest arch, in the middle of the structure, is hypothesized to be the Porta Pompae. In proposing this hypothesis, it was necessary to integrate archaeological data with some illustrations from the eighteenth century and to make comparisons with the hippodrome in Leptis Magna [Ioppolo 1999]. The choice to cover the structure with a series of vaults was made for statical reasons (Fig. 4).

Figure 4. Carceres.

3.3 The Stands

The external perimeter of the circus is delineated by a long wall supporting the stands from which the spectators could watch the race. This architectural element is largely damaged, but it is possible to understand its organization thanks to studies conducted on a portion of the southern side where the state of preservation allows us to recognize all the features. The stands consist of a series of twelve steps divided into two levels by a wall in opus latericium. Each step is 39 cm high and 30 cm deep, so the stands could host around 10,000 spectators [Ioppolo 1999]. Access to the stands was provided by twin staircases flanking the doors located along the inner wall.
Taking into account the measurements of the steps and the width of the surviving walls, it is possible to estimate the total height of the stands at around 8 meters. In the upper part of the external wall, amphoras were embedded in the masonry core in order to lighten the structure. Even though the stands are in extremely poor condition and largely covered by vegetation, the remaining visible parts provided enough detail to model this important part of the circus. Starting from the archaeological plans and the axonometric drawings created by Ioppolo, it was possible to produce an accurate 3D model that includes the twin staircases and the embedded amphoras as well (Fig. 5).

Figure 5. The stands.

3.4 The Tribunal

The tribunal was modeled using as references the available archaeological plan and historical illustrations. In this case, there were many open questions about the hypothesis of restoration. The first was the presence of stairs by which to approach the terrace of the tribunal. This architectonic element is not well known, but the model proposes a staircase, covered by a barrel vault, that connects the ground floor directly to the central part of the tribunal. Other unknown elements are the three doors located at the level of the exedra. The one on the left leads to a small wedge-shaped room, interpreted as a closet, between the octagonal structure and the portico. The central door has been interpreted as a passageway that connects the exedra and the quadrangular room. The third one was probably a window used to give symmetry to the façade of the tribunal (Fig. 6). The last unknown part is the roof; there are no extant elements or useful comparisons enabling us to propose an accurate hypothesis, but we decided to show a roof composed of four distinct elements covering the four different rooms.

Figure 6. The tribunal.
3.5 The Pulvinar

Along the northern side of the circus, close to the second meta, is situated the pulvinar, where the emperor used to watch the race. This feature was directly connected with the palace by means of a passageway. Unfortunately, the archaeological remains are too badly damaged to allow us to make a detailed hypothesis of their reconstruction. Moreover, this part of the circus was largely modified during the Middle Ages, and as a result it is not easy to recognize the Maxentian walls [Ioppolo 1999]. For this reason, the 3D model presented here is largely hypothetical and is based on the historical illustrations from the sixteenth century that show the state of preservation in that period. The resulting model, according to the reconstruction proposed by Bianconi Fea [Ioppolo 1999], presents a rectangular room covered by a terrace and preceded by the stands, with columns on the façade (Fig. 7).

Figure 7. The pulvinar.

3.6 The Spina

The spina was modeled using the archaeological plan and the data collected during the survey. This element is one of the best preserved and most recognizable, so it was also possible to integrate the archaeological data with mosaics and scenes represented on reliefs from the Roman period.

Figure 8. Arrangement of the spina.

3.7 Porta Libitinensis

The Porta Libitinensis is situated along the stands, close to the second meta. This gate, used as an exit for removing injured or dead horses and riders, consists of two arches connected by a vault. It is 4.44 meters wide and 6.89 meters long [Ioppolo 1999]. The Porta Libitinensis is still visible, so it was quite simple to propose a 3D model by integrating the archaeological plan with new photos and direct studies (Fig. 9).

Figure 9. Porta Libitinensis.

3.8 Porta Triumphalis

The Porta Triumphalis is located in the middle of the eastern side and consists of two walls surmounted by a barrel vault and an attic.
The structure still exists, so we could proceed with our model with confidence in the details. Today the visible remains include the walls, the vault, the two stairs connecting the ground floor with the stands, and the steps leveling the slope between the track and the external ground. The only missing part is the attic but, thanks to comparison with contemporary arches, it was possible to hypothesize its likely appearance (Fig. 10).

Figure 10. Porta Triumphalis.

3.9 The Terrain

To complete the model of the circus, it was necessary to elaborate a 3D model of the landscape in which the monument stood. Starting from altimetry data provided by IGM (Istituto Geografico Militare) cartography, it was possible to produce a metrically correct model of the terrain. After importing into Blender the contour lines previously extracted from the maps, it was first necessary to convert the curves into a mesh. Then, using the add-on Delaunay Triangulation and Voronoi Diagram, it was possible to generate a 3D model. To make the object more manageable, the Shrinkwrap modifier was applied. This modifier works by wrapping a simpler mesh onto the model so that it automatically covers the geometry while keeping the shapes and measurements. The result is an accurate, detailed and topologically correct model of the terrain (Fig. 11).

Figure 11. 3D model of the terrain.

4. Texturing

The last step of the project was texturing the 3D model. Because of the poor state of preservation, detailed data were not available but, thanks to previous studies, it was possible to make some restoration hypotheses. For the spina, pulvinar, tribunal and Porta Triumphalis, we propose a marble facing. This hypothesis seems to be confirmed by comparisons with other monuments and by the finding of marble fragments. The situation of the carceres is more complicated; due to the lack of significant remains it is impossible to know for certain whether there was a facing or not.
Nibby asserted that this feature was covered by marble, basing his hypothesis on the quantity of marble found close to the pillars [Luschi 1999]. It is also possible that the only part covered by marble was the central arch, the so-called Porta Pompae. Thus we propose two solutions: the first presents a complete marble covering (Fig. 12) and the second shows the masonry exposed, with only the Porta Pompae given a marble facing (Fig. 13).

Figure 12. Carceres completely covered.

Figure 13. Carceres with masonry exposed.

To obtain a photorealistic result, the model was textured in Blender using the node editor of the Cycles renderer. This tool allows us to produce a detailed texture easily and quickly through the use of nodes reproducing the physical effects related to the behavior of light. The same editor was used to create the water of the euripus (Fig. 14) and to texture the terrain.

Figure 14. Euripus and statue of Venus.

5. Conclusion

The result of this project is a metrically correct 3D model of the Circus of Maxentius that is useful in two ways. On the one hand, the model can stimulate wider reflection on other archaeological topics, such as the structural aspects and decorative systems of this type of Roman monument. On the other hand, it presents a full reconstruction of the monument to a contemporary public audience. The production of a 3D model is a good opportunity to deepen knowledge of the monument and to verify the restoration hypotheses. Moreover, the study of an accurate 3D model allows us to pose new questions concerning the Circus of Maxentius and the entire complex. The same methodology could be applied to many projects in order to propose new hypotheses and to improve knowledge of the monuments under investigation. 3D models also represent a good dissemination tool. For example, our model could be used to show the public how the monument looked during the fourth century AD.
In this regard, details of the model parts have been included in an augmented reality app in order to provide a simple and engaging way to enjoy the monument (Fig. 15).

Figure 15. Augmented reality app.

6. Acknowledgements

We gratefully acknowledge Dr. Ersila Maria Loreti (Sovrintendenza Capitolina ai Beni Culturali) for supporting the research. We also give special thanks to Michele Camicioli for his valuable work on the statue's model and to Francesco Iaia for creating the augmented reality app.

7. References

Giuseppe Ioppolo. 1999. La struttura architettonica. In La villa di Massenzio sulla via Appia. Il circo. Roma: Istituto Nazionale di Studi Romani, 103-195.

Licia Luschi. 1999. Gli scavi di Nibby e la decorazione dell'euripus. In La villa di Massenzio sulla via Appia. Il circo. Roma: Istituto Nazionale di Studi Romani, 197-217.

Giuseppina Pisani Sartorio. 1999. Inquadramento storico. In La villa di Massenzio sulla via Appia. Il circo. Roma: Istituto Nazionale di Studi Romani, 89-100.

Received March 2017; revised July 2017; accepted August 2017.

Lessons from LiDAR Data Use in the Netherlands

Willem Beex, Amsterdam, the Netherlands

LiDAR provides data from which accurate models of the natural land surface, completely stripped of buildings and vegetation, can be derived. Interestingly for cultural heritage and archaeology, most of the data is already freely available for research. This is certainly the case in the Netherlands with the Actueel Hoogtemodel Nederland 2, or AHN2. The measured points are spaced at most 50 centimeters apart, which means that the remains of structures larger than one by one meter can be detected. As a result, many previously unknown structures have been discovered with it. However, these excellent results have blinded many cultural heritage and archaeology practitioners to obvious mistakes when interpreting LiDAR data. This paper is intended to highlight best practices for the use of LiDAR data by cultural heritage professionals.
Keywords: LiDAR, DEM, new archaeological finds

SDH reference: Willem Beex. 2017. Lessons from LiDAR data use in the Netherlands. SDH, 1, 2, 10 pages. DOI: 10.14434/sdh.v1i2.23270

1. Introduction

LiDAR is an acronym for Light Detection and Ranging or, alternatively, Laser Imaging Detection and Ranging. Modern LiDAR data sets are often freely available via the internet. For instance, in the Netherlands the Actueel Hoogtemodel Nederland (AHN) website makes available the most recent digital elevation model (DEM) of the country (ahn.arcgisonline.nl/ahnviewer/). Many other European countries have similar services. Modern LiDAR data offer a unique opportunity to detect previously undocumented surface cultural heritage [Hesse 2010]. These data are often presented using excellent software applications, which are capable of creating beautiful images very easily. In particular, the possibility to "see beneath the vegetation" is highly appreciated. Its use has produced several important and interesting woodland discoveries in recent years [Creemers et al. 2011; Bazelmans 2016; van der Schriek 2016; Meylemans et al. 2016]. However, there are several important caveats that must be considered:

- most LiDAR data are not as uniformly distributed as expected;
- automatic and semi-automatic classification of point data is never 100% perfect;
- the maximum guaranteed density of measured points determines the minimum size of features that can be detected;
- gridding algorithms have limitations; and
- the results are often presented in such a beautiful way by modern applications that the need for field validation of the results is sometimes neglected.

Author's address: Willem Beex, BEEX, Blankenstraat 172, 1018 SG Amsterdam, the Netherlands; email: info@beex.nl. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal
The aim of this paper is not only to consider these problems, but also to provide a checklist that scholars can use in applying LiDAR data to their research.

2. Gaps in the Data

It can be useful to consider radar when studying LiDAR, as the two work in a similar manner. As with radar, a LiDAR beam will reflect off the first object in its path. This object is what gives a "return," and it also leaves a "shadow" behind it. It is very important to understand this, as it means that LiDAR can only create a dense and smooth model of the actual surface if enough laser beams can reach ground level [Gatziolis and Andersen 2008]. In order to optimise the results, all LiDAR flights in the Netherlands take place during the winter months, December 1 through March 31 [van Heerd et al. 2000; van der Zon 2015]. This is the best period of the year for "seeing through the forest," as deciduous trees have shed their leaves and not yet grown new ones. Taking advantage of the winter period will obviously not work for coniferous forests. What is visible from the air will thus depend entirely on the number of viewpoints and the density of the canopy. A uniform distribution will therefore rarely occur, as parts of the surface will not be reached by the laser [van der Zon 2015]. For this reason, it is very important to investigate the actual point cloud provided by the organization that took the measurements. All competent organizations will have these data available for research. For example, on the Dutch AHN2 website all point clouds are available for download. It is even possible to download special GeoTIFF images that mask the areas with a substandard number of observations [van der Zon 2015]. However, for a detailed analysis it is often better to visualize the point cloud projected onto a map of the research area. Several applications exist, both commercial and freeware, that are capable of this operation.
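The kind of coverage check described here can also be done directly on the raw points. The following sketch bins ground returns into grid cells and flags cells with too few observations, in the spirit of the AHN2 masks for substandard observation density; the function names, cell size, and threshold are illustrative, not part of any AHN tool.

```python
from collections import Counter

def coverage_map(points, cell_size=0.5):
    """Count returns per grid cell. Cells with few or no returns mark
    areas where the laser rarely (or never) reached the surface."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

def substandard_cells(points, cell_size=0.5, minimum=2):
    """Cells in the bounding grid whose return count falls below the
    chosen threshold, including completely empty cells."""
    counts = coverage_map(points, cell_size)
    xs = [int(x // cell_size) for x, _ in points]
    ys = [int(y // cell_size) for _, y in points]
    gaps = set()
    for i in range(min(xs), max(xs) + 1):
        for j in range(min(ys), max(ys) + 1):
            if counts.get((i, j), 0) < minimum:
                gaps.add((i, j))
    return gaps
```

Plotting the flagged cells over the research area gives exactly the kind of quality map argued for in this paper.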
Usually it is best to choose a program, or an additional module, that can be incorporated into the existing workflow. When choosing software, it is wise to remember that LiDAR point clouds contain huge numbers of individual measurements. It can therefore be worthwhile to augment the internal memory (RAM) of the computer, and it is wise to test the applications against the available hardware and operating system configuration. An example from the LiDAR point cloud of the Dutch coniferous forest near Wolfheze shows exactly what should be considered before any further analysis (Fig. 1). The points on the surface, indicated by yellow spheres for better visibility, are clearly not uniformly distributed. The vegetation, shown by the green dots, obstructed the laser in many places. This means that only larger features, like ditches and trenches, will consistently appear in 3D models of an area, while smaller objects, like barrows, may not be detected. More deceptively, the untrained observer of a fully processed visualization based on LiDAR data may get the impression that the area has been disturbed. Knowledge of the actual point cloud should help avoid this kind of misinterpretation.

Figure 1. Coniferous forest near Wolfheze. Point cloud of the ground in yellow and of the vegetation in green. The gaps on the ground, created by the trees, are clearly visible.

3. Classification of the Data

Most modern LiDAR data are available in one or more files representing a classification [van der Zon 2015]. This classification is mainly based on "laser returns." The principle is that a reflection (or "return") from the top of a tree or a building arrives slightly earlier than one from halfway down, or from the actual ground surface. Other variables, like the intensity of the reflection from different kinds of materials, are also detectable [Berendsen and Volleberg 2007; English Heritage 2010].
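In practice, such classifications are delivered as a code attached to each point. A minimal sketch of separating classified points for inspection follows; it assumes the standard ASPRS LAS class codes (2 = ground, 3-5 = vegetation, 6 = building), and the tuple layout is illustrative rather than any specific file format.

```python
# Subset of ASPRS LAS classification codes: 1 = unclassified, 2 = ground,
# 3/4/5 = low/medium/high vegetation, 6 = building, 9 = water.
GROUND = 2

def split_by_class(points):
    """Group classified points, given as (x, y, z, class_code) tuples,
    so ground returns can be inspected separately from the rest."""
    groups = {}
    for x, y, z, code in points:
        groups.setdefault(code, []).append((x, y, z))
    return groups

pts = [(0, 0, 10.2, 2), (0, 1, 14.7, 5), (1, 0, 10.1, 2)]
ground = split_by_class(pts).get(GROUND, [])
```

Inspecting each class separately, rather than trusting the delivered "surface" layer, is precisely the check this section recommends.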
Using a (semi-)automatic method, it is possible to classify each measurement based on all those tiny differences. In its most elaborate configuration, a LiDAR file can have points classified with one of up to 31 definitions [Gatziolis and Andersen 2008]. Usually, however, only eight such classes will be used, mainly to differentiate between levels of vegetation, buildings, water, ground, and unclassified data. The Dutch AHN2 uses a slightly different approach: here the main division is between surface and other measurements, because the data set is primarily intended for water management [van der Zon 2015]. In general, LiDAR classification is an excellent approach. However, it is important to remember that it is never 100% perfect [van der Zon 2015]. It is still a semi-automatic process, which means that the software will not detect all the differences, so manual editing may be necessary. The definition of features, certainly in cultural heritage, can also be problematic. For instance, what should be done about the remains of earthworks: are they part of the ground, or are they part of a structure? A very good example can be given from the Kempen region in the Dutch province of Noord-Brabant. For undocumented reasons, two similar restored Bronze Age barrows have been classified in different ways. Figure 2 clearly shows that one barrow (on the left), with its ring-shaped wall, has been removed from the surface data. It was identified as a man-made addition, even though the adjacent barrow was left in the surface data. In addition, another similar structure five kilometers away (on the right) was also left as part of the surface data. In these cases it was easy for researchers to spot the differences, but it illustrates the danger of blindly accepting LiDAR classification without any further checks.

Figure 2. Three similar Bronze Age barrows in the Kempen region, Dutch province of Noord-Brabant.
For some undocumented reason the barrow and the ring-wall on the left have been removed in the classified LiDAR data, but the other two remain. The site on the left is Toterfout – Halve Mijl, Zand-Oerle. The barrow on the right is Den Zwartenberg in Hoogeloon. The co-ordinates follow the Dutch new Amersfoort RD projection.

4. Density of the Data versus Detected Features

While modern LiDAR sources like AHN2 have improved sampling density over earlier surveys, there is still a minimum size of feature that can be detected. It is wise to remember that even this improved sampling density can be effectively degraded by vegetation cover, such as coniferous forests. The Nyquist-Shannon sampling theorem can be used to determine the size of feature that can be detected with any specific sampling spacing. The feature-size detection threshold is an area twice the sampling distance in each direction, and a recognizable shape can only be found when the area of the feature is five times the sampling distance in each dimension [Beex 2003]. This means that if your LiDAR data set has a point every meter, only features with a minimum size of two by two meters will be reliably detected, and only features with a minimum size of five by five meters will have the correct shape. A comparison between the older Dutch AHN1 data and the new AHN2 measurements clearly shows this. AHN1 guaranteed a five-meter resolution and AHN2 a 50-centimeter resolution. Figure 3 shows part of the Maas-Ruhr-Stellung (the Meuse-Ruhr defence line), the German defences on the east bank of the river Meuse, dating from late 1944 and early 1945 [Seltmann 2006; van der Schriek and Beex 2017]. This elaborate trench system is barely discernible in the AHN1 image, but much finer detail is visible in the AHN2-derived image.

Figure 3. Differences in quality between the older Dutch AHN1 on the left and the newer Dutch AHN2 on the right.
The red arrows indicate the remains of the German "Maas-Ruhr-Stellung" (the Meuse-Ruhr defence line). In the higher blue area, the remains of a 'Celtic field' system are also visible. The co-ordinates follow the Dutch new Amersfoort RD projection.

The obvious conclusion is that LiDAR data should not be used to visualise anything smaller than the detection threshold. However, with powerful modern computers it can be tempting to use lower detection thresholds. Indeed, students often try a lower detection threshold, as it gives a 'sharper look' to the visualization. Such behavior not only creates false algorithmic artefacts but also gives a deceptive illusion of precision. This in turn may mislead other researchers, who may then draw incorrect conclusions. Thus, despite the temptation, this practice should be avoided.

5. Algorithms

It is often forgotten that any LiDAR map or 3D model is in fact the output of an algorithm. Even with a dense network of measurement points, translation into a mesh or a contour map must still occur. This is a sophisticated operation for which many alternative mathematical solutions are available.

Figure 4. Two very different images of the same LiDAR data. Both images show the same earthwork, representing a model of a British WWII convoy ship. But in the top-left picture, the algorithm specifically searched for small ditches running NW-SE. Used in this way, it almost looks as if the site has been recently ploughed. In this case, the bottom-right image, produced with a basic search, gives a more accurate representation of the earthwork. The co-ordinates follow the Dutch new Amersfoort RD projection.

Two very different results from the same area near Westelbeers, the Netherlands, clearly show the implications of using different algorithms (Fig. 4). The images show an earthwork built to represent a British WWII convoy ship. The site was used for target practice by German Stuka dive bombers [Beex 2009; van der Schriek 2016].
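The effect of the interpolation configuration can be illustrated with a deliberately simple sketch. This is inverse-distance weighting, not the kriging used for Figure 4, and all names and numbers are illustrative; the point is only that stretching the distance metric along one axis (an anisotropic search) changes the interpolated surface even though the input points are identical.

```python
import math

def idw(points, x, y, stretch=(1.0, 1.0), power=2.0):
    """Inverse-distance-weighted estimate at (x, y). Stretch factors
    other than (1, 1) make the search anisotropic, favouring features
    aligned with one direction."""
    num = den = 0.0
    for px, py, pz in points:
        d = math.hypot((px - x) * stretch[0], (py - y) * stretch[1])
        if d == 0:
            return pz  # exactly on a measurement point
        w = 1.0 / d ** power
        num += w * pz
        den += w
    return num / den

pts = [(0, 0, 1.0), (2, 0, 0.0), (0, 2, 0.0)]
isotropic = idw(pts, 1, 0)                    # ≈ 0.455
anisotropic = idw(pts, 1, 0, stretch=(1, 4))  # ≈ 0.496: NS distances inflated
```

Same data, different parameters, different surface: this is why the algorithm and its settings must always be reported alongside the image.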
Both images were created from the same LiDAR data points. In fact, even the basic algorithm used was the same: kriging with a 5 by 5 metre search radius [Cressie 1990; Abramowitz and Stegun 1972; Surfer 13 2016]. However, in the top-left image, the algorithm was configured to look for elongated structures running NW-SE. As a result, the same data are processed in a different manner, and the resulting image looks as if the site has been ploughed. This example clearly shows that the operator must be careful. The computer does not know the difference, so the researcher must use his or her expertise to select the appropriate method. For instance, some algorithms are particularly adept at finding specific shapes in the landscape, such as linear features like ditches or walls, but can also produce distorted images or 3D models with false structures in them. First, the researcher needs to understand the distribution of the original measurements, the actual landscape, and the physical nature (size and shape) of the features under investigation. Second, proper knowledge of the limitations of the available algorithms is required [Beex 2003]. If these conditions are not met, any result is in fact doubtful.

6. Always Check the Results in the Field

An actual inspection of the research area may seem obvious, but its importance cannot be emphasized too often. For example, only a survey of the terrain of the earthworks near Westelbeers clearly showed why four of the structures could be detected by LiDAR, whereas the fifth had completely vanished (Fig. 5) [Beex 2009; van der Schriek 2016]. The fifth ship was built on a heath that has since become arable land. Its location was outside the area designated as a nature reserve during the 1950s. It was subsequently removed, along with the topsoil, as part of agricultural development. This meant there was no elevation remaining to be measured.
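Before going into the field, a quick numerical check can at least flag such cases. The sketch below compares mean elevations inside a suspected feature footprint with the surrounding terrain; a relief within the vertical noise of the survey (the threshold here is an assumption, as is the function name) means the DEM alone cannot confirm or deny the feature, and a field visit is required.

```python
def needs_field_check(feature_z, surround_z, noise=0.05):
    """True when the mean relief of the suspected feature over the
    surrounding terrain is within the assumed vertical noise (meters),
    i.e. absence in the DEM is not conclusive without ground truthing."""
    relief = (sum(feature_z) / len(feature_z)
              - sum(surround_z) / len(surround_z))
    return relief < noise
```

For the vanished fifth ship, such a check would report no measurable relief, which is exactly the situation that only terrain inspection and background research could explain.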
The fourth model, while located on the heath and within the nature reserve, was in an area of fast-growing vegetation (Fig. 5/4). Therefore, it was not detected when the other three earthworks were restored a decade ago. Of course, there were also other contributing factors. Conflict archaeology was still in its infancy [van der Schriek and van der Schriek 2014], so no proper archaeological survey was conducted at the time of restoration. The area was a restricted military complex during the war, and so not many people knew about the structures. In any case, this example clearly shows the need for background information and actual inspection of the terrain. Without this knowledge, conclusions could have been very wrong indeed.

Figure 5. Location of earthworks (1-4) representing a British WWII convoy near Westelbeers (province of Noord-Brabant, the Netherlands), used for target practice by German Stuka bombers. The red circle indicates the spot where a fifth earthwork used to be. The co-ordinates follow the Dutch new Amersfoort RD projection.

7. Conclusion

If taken for granted, even the most beautiful LiDAR images can become a source of incorrect interpretations and future mistakes. There are five important aspects of LiDAR data that should always be checked and validated before further analysis is undertaken. Perhaps the best solution would be to add an additional map to each LiDAR image, showing the quality of the individual fields in the documented area. But at least metadata, or a very good description of the entire process, must be available, even if the maps and models were prepared by another institution.

8. A Helpful Checklist

Researchers working with LiDAR data should consider these five points:

- always check for the presence of gaps in the data;
- always check the classification of the data;
- always check the density of the data versus the size of the detected features;
- always check which algorithms and variables were used;
- always ground-truth the results in the field.

9. Acknowledgements

The author would like to thank Max van der Schriek, PhD researcher at the Vrije Universiteit, for the Dutch WWII examples, and Bert Brouwenstijn, Vrije Universiteit, for the poster design.

10. References

M. Abramowitz and I. Stegun. 1972. Handbook of Mathematical Functions. Dover Publications.

J. Bazelmans. 2016. Het AHN2 en het raadsel van het toponiem Bussum-Fransche Kamp. Archeologica Naerdincklant. Archeologisch Tijdschrift voor het Gooi en Omstreken 1: 11-23. https://independent.academia.edu/naerdincklant

W. Beex. 2009. Oefenbommen op een zee van heide. Vitruvius 9: 18-21.

W. Beex. 2003. Use and abuse of digital terrain/elevation models. In Enter the Past. The E-way into the Four Dimensions of Cultural Heritage. CAA 2003. Computer Applications and Quantitative Methods in Archaeology. BAR International Series, 1227.

H. J. A. Berendsen and K. P. Volleberg. 2007. New prospects in geomorphological and geological mapping of the Rhine-Meuse delta – application of detailed digital elevation maps based on laser altimetry. Netherlands Journal of Geosciences. Geologie en Mijnbouw 86 (1): 15-22.

G. Creemers et al. 2011. Laseraltimetrie en de kartering van Celtic fields in de Belgische Kempen: mogelijkheden en toekomstperspectieven. Relicta 7: 11-36.

A. C. Cressie. 1990. The origins of kriging. Mathematical Geology 22: 239-252.

English Heritage. 2010. The Light Fantastic. Using Airborne Lidar in Archaeological Survey. Swindon.

D. Gatziolis and H. E. Andersen. 2008. A Guide to LiDAR Data Acquisition and Processing for the Forests of the Pacific Northwest. United States Department of Agriculture. http://www.fs.fed.us/pnw/pubs/pnw_gtr768.pdf

R. M. van Heerd et al. 2000. Productspecificatie AHN 2000. Rijkswaterstaat: rapportnummer MDTGM 2000.13.

R. Hesse. 2010. LiDAR-derived local relief models – a new tool for archaeological prospection. Archaeological Prospection 17: 67-72.

E. Meylemans et al. 2016.
Revealing extensive protohistoric field systems through high resolution lidar data in the northern part of Belgium. Archäologisches Korrespondenzblatt 45(2): 1-17.

M. van der Schriek. 2016. Dutch military landscapes. Heritage and archaeology on WWII conflict sites. 20th Conference on Cultural Heritage and New Technologies, Vienna (CHNT20). http://www.chnt.at/wp-content/uploads/ebook_chnt20_vanderschriek_2015.pdf

J. van der Schriek and M. van der Schriek. 2014. Metal detecting: friend or foe of conflict archaeology? Investigation, preservation and destruction on WWII sites in the Netherlands. Journal of Community Archaeology and Heritage 1(3): 228-244.

M. van der Schriek and W. F. M. Beex. 2017. The application of LiDAR-based DEMs on WWII conflict sites in the Netherlands. Accepted for publication in the Journal of Conflict Archaeology.

M. van der Schriek. 2017. Archaeological research and heritage management on Second World War conflict sites in the Netherlands. Accepted for publication in the Journal of Conflict Archaeology.

Received March 2017; revised July 2017; accepted August 2017.

Data Curation: How and Why. A Showcase with Re-Use Scenario

Philipp Gerth, Anne Sieverling and Martina Trognitz, German Archaeological Institute, Berlin, Germany

The IANUS project, funded by the German Research Foundation (DFG), is building a digital archive and portal for archaeology and ancient studies in Germany. Following a three-year phase of conceptual work, the archive and portal are now being implemented and the data center is beginning its operational work. Data curation is essential for the preservation of digital data and helps to detect errors, aggregate documentation, and ensure the reusability of data; in some cases, it can also add useful additional files and functionality. This paper presents the workflow of data curation based on a data collection about European vertebrate fauna.
It exemplifies the different stages of processing a dataset at IANUS according to the OAIS model, from its initial submission until its final presentation on the data portal. Data access and reusability can be enhanced by enrichment: in the case of the vertebrate fauna dataset, by GIS integration of the geographic information and reutilization of the bibliography. Furthermore, a data re-use scenario is presented in which the dataset has been integrated with one from another repository by using Semantic Web technologies.

Keywords: data curation, data enrichment, long-term preservation, data re-use, Semantic Web

SDH reference: Philipp Gerth et al. 2017. Data curation: how and why. A showcase with re-use scenario. SDH, 1, 2, 12 pages. DOI: 10.14434/sdh.v1i2.23235

1. Introduction

IANUS is a research data center for archaeology and ancient studies in Germany that provides a digital archive and a portal for data dissemination. The project is funded by the German Research Foundation (DFG) and is divided into two phases. The first was a three-year phase of conceptual work; in the current second phase, the concepts are being implemented and the data center is beginning its operational work. The aim is to establish a reliable and sustainable data center where archaeologists, as well as ancient studies researchers in general, can archive their research data and start new research projects with the data they find in the IANUS data portal [Schäfer et al. 2015, pp. 131-134].

The work for this article has been carried out in the project IANUS, which is funded by the DFG under grant agreement no. 903456, and in the ARIADNE project, which is funded by the European Commission under the Community's Seventh Framework Programme, contract no. FP7-INFRA-2012-1-313193.
Author's address: Philipp Gerth, Anne Sieverling and Martina Trognitz, IT Department, German Archaeological Institute, Podbielskiallee 69-71, 14195 Berlin, Germany; email: (philipp.gerth, anne.sieverling, martina.trognitz)@dainst.de. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal

Recently the data portal was launched to present the first datasets from the digital archive (available at http://www.ianus-fdz.de/datenportal/). The homepage of the portal gives an overview of the available datasets, showing a preview picture and a snippet of each project description. Each of the datasets has its own homepage, providing the project overview and two subpages with rich metadata and the files available for download.

Figure 1. Screenshot of the data portal with the homepage of a dataset.

Information about the data provider(s), a map, the associated institution holding the copyright, the license conditions (e.g. CC-BY or CC-BY-SA), the digital object identifier (DOI) [Trognitz 2013] and a recommended citation can be found in the left sidebar of the dataset pages (Fig. 1). The homepage of a dataset displays the abstract of the project, the description of the dataset, selected publications, a statement by the data provider, and relevant keywords. The metadata subpage presents the keywords concerning subject, content, localization, chronology and method, related publications, and statistical information about the files. Before viewing the data page, the terms of use have to be accepted. This page displays all available files of the project, organized in directories. By clicking on the folders on the left side in the box named project structure, a detailed overview of the files is shown, and the files can be previewed or downloaded. Before the download starts, a window with the license conditions informs the user about the terms of re-use.
In some datasets, an additional ZIP file is provided to allow downloading the whole dataset at once.

Before a dataset can be presented on the portal, data curation is needed. The curation process begins with a systematic review of the dataset. The review provides the basis for converting files into appropriate formats for long-term preservation, preparing documentation and metadata, detecting possible errors, and other measures to ensure the preservation and reusability of digital data. In some cases, further functionality and additional files can be created to enrich the dataset during the process of data curation. We present the workflow of data curation based on a data collection about European vertebrate fauna (available at http://dx.doi.org/10.13149/001.mcus7z-2), as well as its enrichment, and showcase a possible re-use scenario for this dataset.

2. Project and Dataset Description

The dataset "European vertebrate fauna" was produced in a project in the 1990s by three scientists at different German institutions: Norbert Benecke (German Archaeological Institute), Angela von den Driesch (University of Munich) and Dirk Heinrich (University of Kiel). In the project, information about European vertebrate fauna was collected and its development from the Late Pleistocene until the Middle Ages analyzed, covering a time span of more than 10,000 years. No new animal bones were gathered and analyzed; instead, the data about all already published animal remains across Europe were entered into a dBASE database. From 4,500 publications, over 8,200 find spots and 100 different species were documented. On the basis of this huge compilation of data, the researchers investigated changes in skeletons, habitat, human-animal relations and many other aspects. See for example [Crees et al. 2016; Sommer et al. 2014] among other project publications.
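The value of such a compilation lies in linking find spots to species records across tables. As a toy illustration of how exported tables of this kind can be rejoined on a shared key, consider the sketch below; all column names, the site name, and the counts are hypothetical and do not reproduce the actual structure of the dataset.

```python
import csv
import io

# Hypothetical miniature versions of two exported tables, joined on a
# shared site key (the real files use different columns and contents).
catalogue_csv = "site_id,site_name,country\n1,Feddersen Wierde,Germany\n"
species_csv = "site_id,species,count\n1,Bos taurus,120\n1,Sus scrofa,45\n"

def finds_per_site(catalogue_csv, species_csv):
    """Total the species counts per find spot by joining the catalogue
    and species tables, mirroring the linked-table idea of the original
    dBASE structure."""
    sites = {row["site_id"]: row["site_name"]
             for row in csv.DictReader(io.StringIO(catalogue_csv))}
    totals = {}
    for row in csv.DictReader(io.StringIO(species_csv)):
        name = sites[row["site_id"]]
        totals[name] = totals.get(name, 0) + int(row["count"])
    return totals
```

Keeping such keys intact during curation is what makes this kind of re-aggregation possible later.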
All parts of the project, including the data produced, were excellently documented, and even the structure of the database was published [Benecke 1999, pp. 152-154]. Subsequently, however, after later examination of the data by different scientists, the database was no longer used. Instead, the data were exported into tables, which were then used and changed. This is the reason the database no longer represents the actual state of progress. Therefore, IANUS did not receive a database, but a dataset containing the various exported and updated tables and files. These files comprise the largest part of the dataset, and their organization is based on the original structure of the database, consisting of the topics catalogue, species, measurements and literature. The structure of the data sheets, their connections and content, as well as the abbreviations, are explained in a detailed readme file that helps in understanding the dataset and curating it properly [Benecke et al. 2016].

3. Data Curation

The curatorial work was carried out and documented according to the Open Archival Information System (OAIS, https://public.ccsds.org/pubs/650x0m2.pdf) [CCSDS 2012]. The Submission Information Package (SIP) of the data provider was transformed into a valid Archival Information Package (AIP) and, for the data portal, into a Dissemination Information Package (DIP) (Fig. 2).

Figure 2. Workflow of data submission, curation and dissemination according to the OAIS model.

3.1 File Renaming and Conversions

To generate the AIP and DIP, a curation strategy based on the IANUS IT recommendations [IANUS 2016] was defined, and the following file conversions were executed. Some of the files and folders had umlauts and space characters, which were changed with Bulk Rename Utility (available at http://www.bulkrenameutility.co.uk/main_intro.php). For the AIP, DOC files were converted to DOCX. This was done with the tool doc2docx (available at http://www.er-ef.net/doc2docx.html).
For digital preservation, and especially for dissemination, DOCX files were also saved as PDF/A-1 using Adobe Acrobat X. Format validation of the PDF/A-1 files was additionally executed with veraPDF (available at http://verapdf.org/). The tables had to be converted from XLS to XLSX for the AIP as well as for the DIP. Additionally, a conversion from XLSX to CSV was carried out; for the batch processing from XLSX to CSV, Bytescout Spreadsheet was used (available at https://bytescout.com/). All XML-based files were validated with Microsoft's Open XML SDK 2.0 Productivity Tool (available at https://www.microsoft.com/en-us/download/details.aspx?id=5124). For future validation, it is planned to test other, non-proprietary tools and integrate them into the data curation workflow.

3.2 File Reorganization

The main data provider, Prof. Dr. Norbert Benecke, allowed us to change the structure of the folders and files as well as their names; we could therefore restructure the file tree. The folder containing all readme files was deleted and the files were moved to the respective other folders to keep them with the contents they describe (Fig. 3). In the SIP, the folder "publications" contained a single file in DOCX format with a list of 13 publications that appeared during the lifespan of the European Vertebrate Fauna project. This list was included in the project metadata and the file was converted to PDF/A. For the resulting AIP and DIP, the folder was deleted and the file moved to the uppermost level of the whole dataset [Benecke et al. 2016]. Since the dataset does not contain a database, as explained above, the names of the folders were changed manually from 'database' to 'files' so as not to confuse future users. For example, the folder 'database countries' was renamed to 'catalogue files' and 'database species' to 'fauna files' (Fig. 3).

Figure 3. Changes and enrichment of the dataset from SIP to AIP.
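The renaming step described in Section 3.1 was performed with Bulk Rename Utility; as an illustration only (not part of the actual IANUS toolchain), the same normalization could be scripted along these lines, with the transliteration table being a conventional German mapping:

```python
import re

# Conventional German transliterations for portable file names.
TRANSLITERATIONS = {
    "ä": "ae", "ö": "oe", "ü": "ue",
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss",
}

def portable_name(name: str) -> str:
    """Return a file name without umlauts or whitespace."""
    for umlaut, ascii_form in TRANSLITERATIONS.items():
        name = name.replace(umlaut, ascii_form)
    # Collapse any run of whitespace into a single underscore.
    return re.sub(r"\s+", "_", name)
```

For example, a hypothetical file name `"Datenbank Länder.xls"` would become `"Datenbank_Laender.xls"`.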
4. Enrichment of the Data

While restructuring and converting the dataset, ideas came up for enriching it in order to facilitate re-use. Two additional means of easier access were generated: one for GIS and one for reference management.

4.1 GIS Integration

To prepare GIS integration, a single file containing all geographic information from the 35 files of the folder "catalogue files" was created [Benecke et al. 2016]. This file was then imported into the open-source desktop geographic information system QGIS (available at http://www.qgis.org/de/site/). Import and display of the find-spot data revealed that the coordinates in the tables needed revision: they were expressed in degrees, minutes, and seconds (DMS) but used the notation of decimal degrees (e.g. "15,59" instead of 15° 59'). The coordinates were therefore converted [Linoff 2015, 148] into proper decimal degrees (e.g. 15° 59' to 15.9833) for use in QGIS (Fig. 4). Errors in some coordinates were corrected manually and documented in a supplementary readme file. All resulting files (CPG, DBF, GeoJSON, PRJ, QPJ, SHP, SHX) generated from QGIS were also stored and are provided together with the original data for re-use [Benecke et al. 2016].

Figure 4. Distribution map of the European vertebrate fauna find-spots.

4.2 Bibliographic Information

In addition to converting the original DOC files containing bibliographic references from the folder "literature" into the respective archival formats (DOCX and PDF/A), it was decided to add further functionality by aggregating the references dispersed across 22 files into one single bibliographic file in BibTeX format. This allows the information to be integrated into reference management software. For this purpose, the documents were saved manually as plain text encoded in UTF-8. With a Python script, the text files were scanned for information about author, year, and title, and converted into a BIB file.
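A minimal sketch of such a scan might look as follows. This is an illustration, not the actual IANUS script: the assumed reference pattern ("Author(s) YEAR. Title.") and the entry key are simplified assumptions, and the `@misc` entry type is one plausible choice.

```python
import re

# Hypothetical pattern: "Author(s) YEAR. Title." on one line.
REFERENCE = re.compile(r"^(?P<author>.+?)\s+(?P<year>\d{4})\.\s*(?P<title>.+?)\.?$")

def to_bibtex(line, key):
    """Turn one plain-text reference line into a BibTeX entry, or None."""
    match = REFERENCE.match(line.strip())
    if match is None:
        return None
    return (
        f"@misc{{{key},\n"
        f"  author = {{{match['author']}}},\n"
        f"  year   = {{{match['year']}}},\n"
        f"  title  = {{{match['title']}}},\n"
        f"  note   = {{{line.strip()}}}\n"
        f"}}"
    )
```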
The original reference was kept and added to the note field. During this process, missing information, wrong punctuation, and similar errors were manually corrected in the text files. In the resulting BIB file, parts where the data providers had marked missing information with question marks or 'xxx' were revised and completed where possible; duplicates were also removed. A single file containing the whole bibliography of the dataset in a standardized format increases the re-use potential of this information [Benecke et al. 2016].

5. Data Re-Use on the European Level

In order to demonstrate the potential of integrating different scientific datasets in the domain of archaeological science, two heterogeneous zooarchaeology datasets, one hosted by IANUS and one by the Archaeology Data Service in York, UK, were combined using Semantic Web technologies. Researchers in zooarchaeology have a long tradition of sharing their datasets and articles in community portals such as Bone Commons (http://alexandriaarchive.org/bonecommons/), the zooarchaeology social network, the ZOOARCH email discussion list (https://www.jiscmail.ac.uk/cgi-bin/webadmin?a0=zooarch), and other platforms [Kansa and Deblauwe 2011]. This discipline serves as a useful case study, as the terminology used is highly standardized, its materials and methodologies are global in scope, and many research questions are only answerable by taking multiple datasets into account. The integration of the two datasets was undertaken as part of the EU project ARIADNE, which brings together and integrates existing archaeological data infrastructures with the goal of offering researchers unified search and discovery facilities over a wide range of distributed datasets [ARIADNE 2016].
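Returning briefly to the coordinate revision described in Section 4.1: the conversion from the pseudo-decimal notation to proper decimal degrees is a one-line computation (minutes divided by 60). The following sketch is an illustration of that step, not the actual curation script:

```python
# A value such as "15,59" actually encodes 15° 59' (degrees and minutes),
# although it is written as if it were a decimal number.

def pseudo_decimal_to_degrees(value):
    """Convert a 'degrees,minutes' string like '15,59' to decimal degrees."""
    degrees, minutes = value.split(",")
    return round(int(degrees) + int(minutes) / 60, 4)
```

Applied to the example given in the text, `pseudo_decimal_to_degrees("15,59")` yields `15.9833`.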
Besides the already described European vertebrate fauna, a second dataset was chosen, from the project "A Review of Animal Bone Evidence from Southern England," funded by English Heritage and aimed at reviewing animal bone evidence from the Late Bronze Age through the Late Iron Age in southern England. The regional review report [Hambleton 2008], for which this database serves as an openly accessible online appendix, provides a synthetic review of published faunal assemblages. Consequently, analyses (e.g. ageing, butchery, biometric data) focus on the exploitation and deposition of sheep, cattle, pig, horse, and dog; other taxa (e.g. wild mammals, birds, fish, and amphibians) are also discussed. The information in the database, published by the Archaeology Data Service [Hambleton 2009], is based on 108 site reports, which correspond to excavations at 101 separate monument locations and 154 distinct 'assemblage' records for faunal assemblages. Additionally, bibliographic references for all zooarchaeological reports reviewed are listed in the database. The two datasets were mapped to the common super-classes and relationships of the CIDOC-CRM ontology to relate both relational data models to a common standard. To guide these mappings, the 3M Mapping Memory Manager tool (available at www.ics.forth.gr/isl/3m/) was used. The knowledge graph derived from the mapping and alignment of the two datasets is depicted in Fig. 5. To overcome the language barrier between the German and English datasets, a common standard was introduced by using the Encyclopedia of Life (EOL, available at http://eol.org/), which provides biological definitions of species in a classification tree. As this thesaurus is not available in Semantic Web formats, the terms used were described in RDF/XML.

Figure 5.
Graphic representation of the CIDOC-CRM-mapped datasets in the "animal remains" scenario.

The integration of the two zooarchaeology datasets leads to an environment in which users can specify queries that run on a common aggregated repository and combine results coming from the different datasets (Fig. 6). The first example in the figure is a species-centric query, in which all sites with horse remains are shown with their bone assemblages; for a researcher interested in the distribution of a specific species, the literature references could be a welcome starting point. The second example illustrates possible basic statistical analyses over a commonly mapped dataset with the help of the query language SPARQL; a frequency distribution of selected species in the archaeological contexts is depicted in the bar chart. This approach highlights the high potential of re-using datasets and shows how new results can be gained without having to invest in the costly acquisition of new research data.

Figure 6. Queries on the two integrated zooarchaeology datasets: a SPARQL query on the left side and a visualization of the result on the right side.

6. Discussion

This paper presented the data portal of the IANUS project and addressed some details of the data curation workflow according to the OAIS standard. A zooarchaeological dataset was used to exemplify the steps involved in the creation of a dataset suited for long-term preservation. Two cases of enriching the data were presented, to allow for GIS and reference management integration. During this process, some errors in the dataset were detected, corrected, and documented. This led to a significant improvement of data reusability without violating the archiving principles of the immutability and authenticity of data, as new datasets were created while the original ones were still preserved. Finally, we have shown a solution for the integration of heterogeneous datasets using Semantic Web technologies.
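The kind of combined query shown in Fig. 6 can be imitated in miniature with plain Python over a toy merged dataset. This stands in for the actual SPARQL machinery over the CIDOC-CRM graph; the site and species tuples below are invented placeholders, not records from either dataset.

```python
from collections import Counter

# Hypothetical stand-in for the aggregated repository: each tuple is one
# (site, species) assertion after both datasets were mapped to the common model.
merged = [
    ("site_a", "horse"), ("site_a", "cattle"),   # records from dataset 1
    ("site_b", "horse"), ("site_b", "sheep"),    # records from dataset 2
]

# Species-centric lookup: every site with horse remains, regardless of source.
horse_sites = sorted({site for site, species in merged if species == "horse"})

# Frequency distribution of species across the combined record,
# i.e. the kind of result shown in the bar chart of Fig. 6.
frequency = Counter(species for _, species in merged)
```

The point of the sketch is the design, not the code: once both sources share one schema, cross-dataset questions reduce to a single query.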
These aggregation activities point in a very promising direction: they could incorporate archaeological datasets stored in different data centers into an integrated knowledge graph, providing access to a huge amount of comparable data. This could enable users to answer research questions across heterogeneous resources and obtain statistically more reliable results without acquiring new data. To exploit this potential, however, researchers must be willing to make their data openly available in open and standardized formats.

7. References

ARIADNE (Advanced Research Infrastructure for Archaeological Dataset Networking in Europe). 2016. Building a Research Infrastructure for Digital Archaeology in Europe. ARIADNE booklet. http://www.ariadne-infrastructure.eu/about

N. Benecke. 1999. The project "The Holocene History of the European Vertebrate Fauna." In N. Benecke, ed. The Holocene History of the European Vertebrate Fauna. Archäologie in Eurasien 6. Rahden/Westfalen: Marie Leidorf Verlag, 151-161.

N. Benecke et al. 2016. Holozängeschichte der Tierwelt Europas [data-set]. Berlin: IANUS. http://dx.doi.org/10.13149/001.mcus7z-2

Bone Commons. http://alexandriaarchive.org/bonecommons/

CCSDS (Consultative Committee for Space Data Systems). 2012. Reference Model for an Open Archival Information System (OAIS). Recommended Practice CCSDS 650.0-M-2, Magenta Book. Washington, DC: CCSDS Secretariat. https://public.ccsds.org/pubs/650x0m2.pdf

J. J. Crees et al. 2016. Millennial-scale faunal record reveals differential resilience of European large mammals to human impacts across the Holocene. Proceedings of the Royal Society B 283: 20152152. http://dx.doi.org/10.1098/rspb.2015.2152

E. Hambleton. 2008. Review of Middle Bronze Age – Late Iron Age Faunal Assemblages from Southern Britain. Research Department Report Series 71-2008. English Heritage.

E. Hambleton. 2009. A Review of Animal Bone Evidence from Southern England [data-set].
York: Archaeology Data Service [distributor]. http://dx.doi.org/10.5284/1000102

IANUS, ed. 2016. IT-Empfehlungen für den nachhaltigen Umgang mit digitalen Daten in den Altertumswissenschaften [version 1.0]. http://dx.doi.org/10.13149/000.y47clt-t

S. W. Kansa and F. Deblauwe. 2011. User-generated content in zooarchaeology: exploring the "middle space" of scholarly communication. In E. Kansa et al., eds. Archaeology 2.0: New Approaches to Communication and Collaboration. Los Angeles: Cotsen Institute of Archaeology Press, 185-206.

G. Linoff. 2015. Data Analysis Using SQL and Excel. New York: Wiley.

F. Schäfer et al. 2015. Forschungsrohdaten für die Altertumswissenschaften – eine kurze Bilanz der aktuellen Situation von Open Data in Deutschland. Archäologische Informationen 38, 125-136. http://dx.doi.org/10.11588/ai.2015.1.26156

R. S. Sommer et al. 2014. Range dynamics of the reindeer in Europe during the last 25,000 years. Journal of Biogeography 41, 298-306. http://dx.doi.org/10.1111/jbi.12193

M. Trognitz. 2013. Abschlussbericht Testbed "Persistent Identifiers." http://www.ianus-fdz.de/projects/ergebnisse/wiki

ZOOARCH. Email discussion list. https://www.jiscmail.ac.uk/cgi-bin/webadmin?a0=zooarch

Received March 2017; revised July 2017; accepted August 2017.

Reconstructing Vindonissa as a Living Document: A Case Study of Digital Reconstruction for Output to Pre-Rendered and Real-Time Applications

Jonas Christen, ikonaut GmbH, Switzerland / Zurich University of the Arts, Switzerland

The legion camp Vindonissa in Switzerland is considered one of the most important Roman excavation sites north of the Alps. Research there has been conducted for over a century, and reconstructive drawings have always been a way to showcase scientific progress. The earliest of these drawings date back to 1909. In 2015, the local archaeological service decided that a new series of illustrations should be produced.
Topographical data, archaeological plans, and building profiles provided by experts were the basis for these illustrations. Future uses of the same model could include animations or real-time applications for augmented and virtual reality. In order to avoid remodeling for these uses, the whole camp and its surrounding settlements had to be constructed as adaptive and flexible 3D models. The requirements on a model for still rendering are very different from those for real-time rendering in game engines, and the reasonable level of detail for images at eye level is very different from that for a bird's-eye panorama. The main challenge was therefore to develop an efficient workflow for multiple output media and different points of view. While some of the proposed methods proved to facilitate the process without adding modeling time, many open questions remain. A "living document" should allow all stakeholders (excavators, archaeologists, historians, and illustrators) to access and change information at all stages of the process; this must still be considered a long-term goal and a problem far from being solved.

Key words: Computer visualization, 3D reconstruction, 3D modeling, virtual reality, Roman legion camp.

SDH Reference: Jonas Christen. 2017. Reconstructing Vindonissa as a Living Document: A Case Study of Digital Reconstruction for Output to Pre-Rendered and Real-Time Applications. SDH, 1, 2, 396-408. DOI: 10.14434/sdh.v1i2.23280

1. Introduction

The company ikonaut GmbH works closely with experts in order to accurately communicate scientific content either to other experts or to a broader public. Its members are trained scientific illustrators, aware of the trade-offs and dangers implied by a visual reduction in the complexity of content or the depiction of details that have no adequate scientific justification.
In the case of the new digital visualizations of the Roman legion camp of Vindonissa, the client (the Archaeological Service of the Canton of Aargau in Switzerland) deliberately decided to move away from the graphic style of previous visualizations and asked for three more detailed, photorealistic images. The leader of the Vindonissa excavations, Dr. Jürgen Trumm, oversaw the reconstruction process very closely and was responsible for the scientific accuracy of the buildings and the environment depicted. Early in the process, the client and the company decided to attach importance to the reusability of the emerging model: it should not only be usable for this specific use case but adaptable for future known and unknown uses with a manageable investment of time and money.

Author's address: Jonas Christen, Zurich University of the Arts, Department of Design, Pfingstweidstrasse 96, CH-8005 Zürich, Switzerland; email: jonas.christen@zhdk.ch. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal.

2. Earlier Visualizations of Vindonissa

To understand the need for easily adaptable models, a short overview of the camp's history and of previous visualizations reconstructing it is desirable. Inscriptions and coins were discovered in the area as far back as the 14th century, but excavations only started toward the end of the 19th century [Trumm 2015]. The earliest reconstruction drawings of Vindonissa date to 1909 [Trumm 2016], but they are limited to individual buildings whose remains give a clear indication of their original use (e.g. the amphitheater). The first attempt at a partial reconstruction of the camp dates to 1939 and was made by W. Eichenberger (Fig. 1a). Thanks to grey areas around the reconstructed part, it clearly separates the better-known parts from areas that had not been excavated at that time. In 1945, the architect H.
Herzig drew the first full reconstruction of the camp (Fig. 1b). While the general layout and placement of the structures hold up to today's scientific findings, various individual buildings are now thought to have looked very different, an obvious example being the factory-like houses with chimneys. Around a decade later, a colorful illustration with legionnaires in the foreground was made; it was supposedly published in a school book in the 1950s, although no reference to the author or the publication could be found (Fig. 2a).

Figure 1. Early visualizations of the Roman legion camp: a) only known structures are depicted [Eichenberger 1939]; b) the first attempt at reconstructing the whole camp [Herzig 1945].

It was not until 2001 that a new attempt at reconstructing the settlement was made by A. Haltinner (Fig. 2b). It left much interpretation open to the viewer and did not go into detail where the underlying information was not clear enough. Quite contrary to that image, the next reconstruction, from 2006, is very detailed and lively (Fig. 3): it attempts to capture the hustle and bustle that a city-like settlement with around 10,000 inhabitants must have had. Five years later, a 3D model was printed and installed as a central piece of the permanent exhibition at the site's museum (Fig. 4). It has almost the same scientific basis as the previous image, but instead of focusing on the inhabitants of the camp, it depicts the buildings with a much higher level of detail.

Figure 2. More than fifty years apart but not so different: a) northern view by an unknown artist in the 1950s [Trumm 2016]; b) graphic approach at the start of the 21st century [Haltinner 2001].

Figure 3. A lively interpretation of the camp and its surroundings [Atelier Bunter Hund 2006].

Figure 4. New media allowing for new forms of representation: 3D-printed model in the local museum [Flück 2011].
3. Challenges of Producing for Multiple Formats

The model mentioned above was available at the start of this project in a converted format, but it had originally been constructed for 3D printing. This meant that some details were missing and that, because of its polygon structure, it could not be included in the texturing workflow. Instead of adapting the existing models to new needs, the decision was made to start again from scratch. In order to avoid remodeling for future uses, the model should be easily adaptable. The following section is an overview of present and future challenges and proposed solutions. While some of these solutions are already commonplace in the workflow, others are more difficult to integrate or even mutually exclusive. Due to time and budget constraints, it was not possible to adhere to the proposed ideal solution. As a compromise, individual buildings were produced with a relatively low polygon count but with high-quality textures. As the textures had to be painted for every building anyway, they could be produced in high resolution almost without any additional work. The number of template buildings was to be kept as low as possible to allow them to be replaced easily. The Archaeological Service of the Canton of Aargau has a general plan that combines all excavation findings of the site; but since the purpose of this plan was to give a general overview and visual summary, it was generated manually in Adobe Illustrator and does not include geodata. In 2017, work started on generating a complete plan for CAD/GIS environments within a few years. This would facilitate the inclusion of georeferenced data in 3D models as well as the exchange between scientists and 3D artists. In the project discussed in this paper, however, it was not possible to georeference the archaeological findings because of the lack of such a general plan.
That and other limitations mean that the "living document" is still a long-term goal and not something that has already been achieved. Nevertheless, this paper will shed light on some considerations that should help in working towards a real "living" version of Vindonissa.

Table 1. Overview of the challenges faced in this project.

4. A Workflow for Long-Term Effectiveness

The basic information for the modeling (approximate number and size of windows, inclination of the roofs, etc.) could be obtained from the model made in 2011. The appearance of the buildings inside the legion camp was therefore already set and only needed minor updates according to scientific findings. The discussion between the modeler and the expert focused much more on the number and positioning of houses in the vici: results of recent excavations indicate that the settlements around the legion camp were much smaller than previously thought. Windows and doors were inserted into the basic models as Boolean objects using Cinema 4D's MoGraph module. This allows the number and positioning to be changed for each model while including a certain degree of randomness to make the model appear livelier. The roofs were modeled as separate objects so that they could easily be transferred between the template models. It would have been possible to use as few as 16 polygons per roof, but they were instead cut into pieces of around 1 meter, as shown in Fig. 5.

Figure 5. Buildings are cut into polygons of approximately even size for deformation.

That allowed us to use a deformer that shifts the location of each point by a few centimeters according to a black-and-white image, again with the goal of making the model look more life-like by avoiding perfectly straight forms. All models are constructed so that they extend about one meter into the ground; this way their instances can be placed on uneven ground without the risk that parts of a building appear to hover above it.

Figure 6.
The painted texture as it looks after export from 3D-Coat.

Figure 7. Variations can be obtained by turning layers on and off: a) recently built version; b) weathered and damaged version of the same building.

The texturing workflow required working back and forth between applications. After a review of the suggested textures by the scientific expert, the models were exported from Cinema 4D in OBJ format. From there, they were imported into 3D-Coat, which has very flexible UV-mesh and painting tools. The color information was applied in multiple layers, starting with the basic roof tiling and plaster before filling in a weathered and damaged look. These layers could be exported as separate images or combined in an editable Photoshop file like Fig. 6, including a normal map. Color correction and variation according to the different ages of buildings can easily be applied, as shown in Fig. 7. The textures were created at a resolution of 4096x4096 pixels, much higher than needed for the first set of renderings but, again, with the goal of using them in future close-ups. To keep the modeling and texturing process as fast as possible, it was necessary to identify a group of base models that could be used multiple times within the legion camp. The 23 models in Fig. 8 were used repeatedly in around 95% of the buildings in the scene; the other 5% are special buildings such as the surrounding wall, the aqueduct, the amphitheater, etc. The base models could even be used for some special buildings, such as the baths in Fig. 9, with the reservation that for a closer look the model would have to be redone, as it lacks the detail of manually made models. Each porticus in the scene uses the same original porticus section shown in Fig. 10, which is modeled with only 32 polygons and therefore saves much-needed computing power. Despite these considerations, it is still necessary to convert the whole scene with a few manual steps before exporting it to real-time engines like Unity.

Figure 8.
A repository of base models makes up most of the camp.

Figure 9. Even complex buildings can be composed of the base models as long as they are not viewed up close.

Figure 10. The original simple porticus is used throughout the whole scene.

The terrain is an area that required many compromises in this project, owing to the different characteristics of working in the 3D modeling software Cinema 4D and in the real-time engine Unity. In the modeling tool, the whole area is viewed from above and does not require a high polygon count or image resolution. In the real-time software, on the other hand, the user will be standing on the ground and will expect the terrain to simulate realistic bumps in elevation as well as high-resolution textures. For the scope of this project, it was necessary to limit the resolution of the polygon mesh and the textures and to work with a proven re-projection method shown in Fig. 11: a raw rendering of the terrain was imported into Photoshop and painted there according to details provided by the scientific expert. The resulting flat image was then camera-projected onto the terrain mesh, which resulted in a highly variable look that allowed lighting settings to be changed dynamically. Another obvious difference between pre-rendered images and real-time renderings is that, in the former, it is possible to apply color correction, fix mistakes, and even add atmospheric details such as smoke and clouds. In a game engine, all these effects need to be processed in real time, which uses a lot of computing power and is therefore used sparingly in low-budget simulations.

Figure 11. The landscaping process: a) the visualized terrain data; b) painted texture in Photoshop without lighting or shadows; c) the texture is reapplied onto the terrain through camera mapping.

Figure 12. The raw rendering out of Cinema 4D before applying atmospheric details in Photoshop.
5. Challenges and Lessons Learned

Working efficiently for one primary output medium while keeping another in mind led to considerable challenges. The base models of houses, for example, had one initial size that had to be adapted for multiple uses in and around the legion camp; in extreme cases, models were scaled to between 0.5 and 2 times their original size. In the pre-rendered situation with a bird's-eye view, the resulting changes in texture and window size are not visible, but viewed from eye height these errors are noticeable and very distracting. More models would therefore have to be made if the whole camp were to be made accessible in close-up. Some of the decisions taken at the start of the project were later considered obsolete and should not be repeated in future projects. The windows, for example, are just one extruded polygon, chosen to keep the polygon count low. Their appearance is not satisfactory in close-up, and it would not have made a big difference in work or polygon count to add a bevel (curvature) to the window frames.

6. Current Status and Outlook

The still renderings shown in Figs. 13, 14, and 15 were first published in the annual report of the Gesellschaft Pro Vindonissa [Trumm 2016]. As there is currently no mandate or planned use for the real-time application, a proof of concept was pursued as an internal project at ikonaut. Further development of specific questions regarding movement and narrative interaction was posted as a student project at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). A team of two students then created the framework shown in Fig. 16, which allows objects to be easily imported into an environment with built-in functionality for a common form of teleportation locomotion as well as audio guide stations that are activated by the user.
The next step will be to import the model of the whole legion camp into the real-time application and decide on one or multiple points of interest where a story can be told and where accessories and environmental effects will be added.

Figure 13. Final image: overview of the camp and its surrounding landscape in the 1st century AD.

Figure 14. Final image: close-up of the camp in the 1st century AD.

Figure 15. Final image: close-up of the settlement after military use, in the 2nd century AD.

In the process of transferring models from Cinema 4D to Unity, it was noted that a lot of manual labor is still involved for every object. Even when the lessons learned are taken into account, the process is not satisfactory, and there is still a long way to go until it can be called a true "living document." In an idealized version of such a document, everyone involved in the production of knowledge in the project would enter additional data into the same file, and all programs, from CAD to 3D and real-time applications, would draw their information from this file. Such a solution would decrease the time needed for exchange and help to keep everyone aligned, from excavation personnel to archaeologists to 3D artists and even the public. Academic work has been done previously on different aspects of this larger end-goal; Working Group 5 of the COSCH project exemplifies an international multidisciplinary approach to reaching solutions for some of these issues [COSCH Action 2009]. While it will not be realistic for any one institution or company to solve the whole problem, it makes sense to approach the long-term goal in small steps. An example of such a small step is the plan to post another student project at FHNW outlining a solution for automating the model exchange process: so-called "instances" in Cinema 4D should be automatically transferred to "prefabs" in Unity.
While their function is the same (one original model is cloned in order to save processing power and allow easy updating), their practical application and the way the program handles them differ between the two applications. During the process, the instances from Cinema 4D will also have to retain their names, which follow the archaeologists' naming convention. Another goal is to include documentation of background information and of the way decisions were taken in the project, in order to comply with Principle 4 of the London Charter [London Charter Group 2009]. One of the solutions proposed at CHNT in earlier years, e.g. [Apollonio 2016] or [Hauck and Kuroczynski 2015], could be applied. For a private company, it would be preferable to have a minimalistic documentation standard that could be expanded upon later. As the documentation process is not (yet) part of the project outline and is therefore unpaid, it would only be feasible to include documentation if the required workflow is not too time-consuming.

Figure 16. First look at the virtual reality implementation in Unity. A navigation menu with available locations is displayed; the user teleports to new locations using hand controllers.

7. Conclusion

This paper presented a method for the 3D modeling of a Roman legion camp, planning for output in multiple formats from pre-rendered to real-time rendering. Some of the ideas put forward could be applied universally when working on a large-scale 3D model of a settlement where many buildings can be classified into types thanks to their repetitive nature. Some challenges inherent in the goal of producing for pre-rendered and real-time renderings at the same time were mentioned; most notably, these challenges include the level of detail for modeling the buildings and the terrain. In pre-rendered scenarios from a bird's-eye view, the buildings can be modeled with few polygons and low-resolution textures, while the terrain can be painted in external software.
Including 3D vegetation is computationally expensive and not necessary. By contrast, in real-time renderings from an eye-level perspective, house models need more detail in modeling and textures, and the terrain must include 3D vegetation if a realistic look is desired. Combining these elements is only possible when extra time and budget are available, though small preventive measures, some of them outlined in this paper, can help to keep later development time down.

8. References

Atelier Bunter Hund. 2006. Leben im Aargau. Andrea John & Felix Boller. In Beat Gutshause. Buchs: Lehrmittelverlag des Kantons Aargau, 36f.
COSCH Action. 2017. Working Group 3: Algorithms and Procedures. http://cosch.info/wg3.
Fabrizio Apollonio. 2016. Classification schemes and model validation of 3D digital reconstruction process. In International Conference on Cultural Heritage and New Technologies, Vienna, 2015 – Proceedings. http://www.chnt.at/wp-content/uploads/ebook_chnt20_apollonio_2015.pdf.
Hans Herzig. 1946/1947. Versuch einer Rekonstruktion der Tore, Türme und Umwallung von Vindonissa. In Gesellschaft Pro Vindonissa, ed. Jubiläumsbericht 1946/47. Brugg: Vindonissa-Museum, 68.
Jürgen Trumm & ikonaut. 2016. Vindonissa aus der Vogelschau – neue und alte Blicke auf das römische Windisch. In Gesellschaft Pro Vindonissa, ed. Jahresbericht 2015. Brugg: Vindonissa-Museum, 11.
Jürgen Trumm. 2015. Vindonissa. In Historisches Lexikon der Schweiz (HLS). http://www.hls-dhs-dss.ch/textes/d/d12287.php
Karin Meier-Riva. 2001. Unter der Erde. Vom Leben und Sterben in Vindonissa. 7.
London Charter Group. 2009. The London Charter for the Computer-based Visualization of Cultural Heritage. http://www.londoncharter.org/fileadmin/templates/main/docs/london_charter_2_1_en.pdf
Matthias Flück. 2011. The printed legionary camp of Vindonissa – the development of a new digital and physical model of Vindonissa. In International Conference on Cultural Heritage and New Technologies, Vienna, 2010.
http://www.chnt.at/wp-content/uploads/ebook_ws15_part3_sessions1.pdf
Oliver Hauck and Piotr Kuroczynski. 2015. Cultural Heritage Markup Language – how to record and preserve 3D assets of digital reconstruction. In International Conference on Cultural Heritage and New Technologies, Vienna, 2014. http://www.chnt.at/wp-content/uploads/ebook_chnt20_hauck_kuroczynski_2015.pdf
Walter Eichenberger. 1938/39. Frontispiece. In Gesellschaft Pro Vindonissa, ed. Jahresbericht 1938/39. Brugg: Buchdruckerei Effingerhof AG, frontispiece.

Received January 2017; revised July 2017; accepted August 2017.

Archaeological Excavation and Documentation of Kafir Kala Fortress

Tomoyuki Usami, Graduate University for Advanced Studies, Japan
Alisher Begmatov, Kyoto University, Japan
Takao Uno, Tezukayama University, Japan
Amridin Berdimurodov, The Institute of Archaeology of the Academy of Sciences of the Republic of Uzbekistan, Uzbekistan
Gennadiy Bogomolov, The Institute of Archaeology of the Academy of Sciences of the Republic of Uzbekistan, Uzbekistan

The site of Kafir Kala is located to the south-east of the modern city of Samarkand, Uzbekistan, and is well known for its unique seals and other artifacts. Since 2013, the Japanese-Uzbek joint archaeological expedition has been carrying out excavations and digital surveys at this site, mainly focusing on the fortress area. This paper is a preliminary presentation of newly excavated pre-Islamic structures and 3D models, contributing to a better understanding of the urban settlement history of pre-Islamic Samarkand, as well as of other regions of Central Asia.

Key words: Kafir Kala, fortress, fire layer, laser scanning technology, 3D models

SDH Reference: Tomoyuki Usami et al. 2017. Archaeological Excavation and Documentation of Kafir Kala Fortress. SDH, 1, 2, 12 pages. DOI: 10.14434/sdh.v1i2.23267

1. Introduction

Central Asia is generally characterized by its vast expanse of steppes and deserts with an arid climate.
The oasis of Samarkand, however, is rich in water and has fertile lands suitable for agriculture and pasturage, owing to the Zarafshan River that flows through the region. This is perhaps one of the main reasons why several important and populous urban settlements along the Silk Road emerged in this oasis. Only a few of these urban settlements have been thoroughly studied. The site of Kafir Kala, however, was only sporadically excavated and studied until the early 2000s.

Authors' addresses: Tomoyuki Usami, Department of Japanese Studies, The Graduate University for Advanced Studies, Japan; email: usami.tm@gmail.com; Alisher Begmatov, Department of Letters, Kyoto University, Japan; email: alisher.begmatov@gmail.com; Takao Uno, Department of Culture and Creativity, Tezukayama University, Japan; email: tsuno@tezukayama-u.ac.jp; Amridin Berdimurodov, The Institute of Archaeology of the Academy of Sciences of the Republic of Uzbekistan, Uzbekistan; Gennadiy Bogomolov, The Institute of Archaeology of the Academy of Sciences of the Republic of Uzbekistan, Uzbekistan. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal

Between 2001 and 2008, the Italo-Uzbek expedition carried out excavations mainly on the citadel of Kafir Kala, a lofty and well-arranged fortress located to the south-east of Samarkand. During the excavations, the expedition unearthed various artifacts of the Islamic and pre-Islamic periods. The expedition also identified a layer in which a severe fire had occurred; the fire presumably broke out during the Arab conquest of Central Asia. Numerous seals were discovered in the fire layer, indicating that the citadel of Kafir Kala may have functioned as an important administrative center. The Italo-Uzbek expedition identified the structure and main functions of the citadel after the occupation of the site.
However, the structure and function of the citadel before the occupation remained unclear due to the discontinuation of the excavations; only a small part of the fire layer in the south of the citadel had been excavated. In 2013, the Japanese-Uzbek¹ expedition launched its archaeological research, mainly to disclose the structure of the citadel below the fire layer and to identify the function of the citadel before the fire occurred. Along with excavations, the expedition also carried out digital documentation. The present paper aims at presenting the results of the excavations at Kafir Kala conducted in recent years, as well as the production of 3D models of the newly unearthed structures below the fire layer of the fortress.

¹ The Japanese-Uzbek expedition began its collaborative research in the oasis of Samarkand in 2005. In 2007-2012, the expedition carried out a number of excavations on the citadel, inner city, and suburb of the archaeological site of Dabusia, a large urban site (approx. 80 ha) situated halfway between Samarkand and Bukhara, two core cities along the Silk Road. The expedition revealed details about the emergence of the settlement and its later development as a major economic center (Uno and Berdimurodov 2013). The members of the Uzbek part of the Japanese-Uzbek expedition were mostly the same as in the Italo-Uzbek expedition. It was at the initiative of the Uzbek part that the archaeological research at Kafir Kala was organized.

The Research Background

The site of Kafir Kala is located approximately 12 km south-east of the famous site of Afrasiab (old Samarkand) along the Dargom River, which branches off from the upper-middle Zarafshan River (Figure 1). Its location perhaps played a strategic role in connecting Samarkand with areas to the east, such as Penjikent, and to the south, including the key cities of the modern Kashkadarya region, as well as the cities of the modern Surkhandarya region and even further south.
The site consists of three main parts: citadel, inner city (shakhristan), and suburb (rabad). The citadel, which occupies the central area of the Kafir Kala complex, is a considerably high square fortress (about 20 m high and 75 × 75 m at the base) surrounded by three towers in the northeast and southwest (Figure 2). Kafir Kala (Arabic: كافر kāfir "unbeliever", قلعة qalʿa "fortress"), meaning "fortress of the unbelievers," is a toponym which replaced the original name of the site sometime after Central Asia converted to Islam. This kind of replacement of site names is observed elsewhere in Central Asia, causing serious difficulties for researchers in identifying sites from written sources. Nevertheless, Grenet and de la Vaissière (2002) proposed that Kafir Kala might be identified as Rewdat,² which was noted by Ibn Hawqal, a medieval historian, as the residence of the king of Fergana, located one farsākh (i.e., ca. 6 km) south of Samarkand. This was convincing because Kafir Kala is the biggest archaeological site in the southern vicinity of Samarkand. However, coin finds unearthed from the fire layer of the site indicate that Kafir Kala had already been occupied once, a few years before the arrival of the king of Fergana to form an alliance with the ikhshid (king) of Sogd against the Arabs. Thus, we cannot precisely identify the site from the written sources.

² Rewdat was traditionally identified as Tali Barzu, located about 6 km south of Samarkand. This site was systematically excavated by Grigorjev.

Figure 1. Location map of Kafir Kala on a Landsat TM satellite image.

Prior studies of the site of Kafir Kala go back to the late 1920s, and they provide valuable primary information about the site. The first topographical map of the site was made by Masson (1928). The first systematic excavations were carried out by Grigorjev (Grigorjev 1941, 1946), who mainly focused on the suburb area of Kafir Kala and unearthed craft materials, including a pottery kiln.
It was followed, after more than a decade, by the work of Obel'chenko and Shishkina: Obel'chenko identified two main periods of settlement in the city, and Shishkina excavated a necropolis in the suburb (Shishkina 1961, 1969); this necropolis was reconstructed by Nil'sen (Nil'sen 1965, 1966). At the end of the Soviet era, and right after Uzbekistan became an independent republic, the Institute of Archaeology of the Academy of Sciences of Uzbekistan opened a few test trenches in the citadel (Berdimurodov and Samibaev 1995). A decade later, an Italo-Uzbek expedition (University of Bologna and the Institute of Archaeology of the Academy of Sciences of the Republic of Uzbekistan) carried out large-scale and systematic excavation campaigns focusing on the fortress/citadel between 2001 and 2008. The outcomes of this joint research significantly enriched our knowledge about Kafir Kala. In the early years, the expedition discovered a part of the fire layer, which yielded nearly 500 specimens of seals. The fire layer is presumably one of the clear traces of a tragic event which occurred during the Arab conquest of the region (see Mantellini and Berdimurodov 2005; Cazzoli and Cereti 2005).

Figure 2. The view of the fortress: a) from the north-west, b) from the south-west (photo by Tomoyuki Usami).

As mentioned earlier, the investigations of the Japanese-Uzbek expedition at the fortress/citadel began in 2013. The actual function of Kafir Kala before the occupation remained obscure after three seasons of excavations. There were some assumptions that the fortress may have functioned as a Zoroastrian temple, as its structure, with four corner towers, a courtyard in the center, and long benches (sufa) along the inner walls, was similar to the structure of Jar-tepa (Berdimurodov and Samibaev 1999). This site was a Zoroastrian temple situated about 30 km east of Kafir Kala which was also presumably destroyed during the Arab occupation of the region (see Figure 1).
However, the results of recent excavations at Kafir Kala shed some light on this issue. New findings such as fragments of wall paintings discovered in the citadel indicate that the site may have functioned as a fortress. Nevertheless, we will have to excavate the last remaining small part in the north of the citadel in order to adequately explain its structure and function.

2. Outline of the Results of the Investigations

Following the work done by the Italo-Uzbek expedition, the Japanese-Uzbek expedition has continued further investigations at this key site, mainly focusing on the fortress area. As mentioned above, nearly 500 specimens of seals were discovered by the Italo-Uzbek expedition at Kafir Kala. During the Japanese-Uzbek excavations, over 200 additional specimens of seals have been unearthed from the same fire layer. The find of 700 seals in total from a single site is, thus far, the largest ever made in Central Asia. The seals bear impressions of various divine, human, and animal figures, as well as geometric shapes, and some carry Sogdian and Bactrian inscriptions (Figure 3). The seal finds are clear evidence that Kafir Kala played a vital role in the region (see also Cazzoli and Cereti 2005; Begmatov et al. 2016).

Figure 3. Examples of seals unearthed from Kafir Kala (photo by Alisher Begmatov).

A few dozen bronze and silver coins of the pre-Islamic period were also found in the same fire layer, and these allow us to date this layer. The fact that the latest coins discovered in this layer are known to have been issued during the reign of Tarhun (AD 700-710) has reinforced our view that the fire at the citadel occurred at the beginning of the eighth century. Apart from the coin and seal finds, various other artifacts, of metal and pottery, were unearthed. The pre-Islamic pottery finds in particular represent unique examples for the area.
Through the regular excavation work, the Japanese-Uzbek expedition has identified the whole structure of the fortress in the fire layer: sufas (benches made of rammed earth or bricks) along the eastern and western walls, discovered in 2013-2014; a courtyard in the center, in 2014; corner towers, in 2013-2016; the remains of a large building with several square holes for wooden pillars; and, finally, fragments of wall painting, in 2015-2016. Additionally, the expedition discovered that the fire layer spread over almost the entire inner ground surface of the fortress. This fact convinces us again that a huge fire must have occurred in the citadel.

Between 2013 and 2014, the Japanese-Uzbek expedition identified two sufas along the east and west walls. The sufa along the eastern wall is preserved in relatively good shape (Figure 4). Its length is about 22 m and its width about 130 cm. Six square holes for wooden pillars, at intervals of approximately 170 cm, were found 280 cm to the west, along the sufa. In 2014, the courtyard of the fortress was excavated, and on the eastern side of the courtyard's center a tree root was found, which may suggest that a tree once stood inside the fortress. Nearby, to the west of the tree root, there was a slope gently rising towards the remains of the large building that was excavated later, in 2015.

Figure 4. The sufa along the eastern wall (photo by Alisher Begmatov).

During the campaigns of 2013-2015, we excavated the four corner towers of the fortress. The shapes of the towers were oval, but slightly different from one another, perhaps due to damage. In the fire layer of the south-eastern tower, a hoard of Sogdian coins was found. The discovery of a large building (trenches 6 and 7, discussed below) with wall paintings in the fire layer is one of the most significant finds. The building, as well as the other structures and materials in the fire layer, is quite well preserved.
Although the wall paintings are damaged by the fire, they provide us with valuable information about colors and decorative ornaments, and may also help us understand the principles governing the construction of the inner fortress. The building consists of a platform with evenly distributed wooden pillars; some of the square burned wooden bases, approximately 90 cm on each side, remain. During 2015-2016, the Japanese-Uzbek expedition excavated trenches 6 and 7 and identified 12 square holes in total. Since we had noticed a few square holes in earlier excavations in the western part of the big building, we estimate that 18 wooden pillars were built to support the roof of the building. The size of this large building is 22 m in length and about 11 m in width. We also noted the presence of two further entrances (altogether three entrances, including the one excavated in 2013) to rooms on the north-eastern side. The central room stands directly opposite the entrance into the fortress and is reached by a gently rising slope approximately 180 cm from where the courtyard begins. This central room is perhaps the key to completely understanding the function of the citadel; it will be excavated in the next season.

3. Digital Survey of Kafir Kala Fortress

As previously mentioned, the Japanese-Uzbek expedition has made considerable efforts toward the digital documentation of the Kafir Kala fortress. We have employed different technologies to obtain the most accurate data. We first conducted a topographical survey of the fortress and its surrounding areas with the use of a total station and GPS, and generated a digital elevation model from the topographic data (Fig. 5). It clearly shows the characteristic shape of the fortress. We then scanned all the excavated areas (trenches) and structures of the fortress with a 3D laser scanner; a FARO Focus 3D scanner was mainly used in this work (Fig. 6).
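The step of turning scattered total-station and GPS points into a digital elevation model can be sketched as simple grid binning: each (x, y, z) measurement is assigned to a grid cell and heights within a cell are averaged. This is a generic illustration of DEM gridding, not the expedition's actual software or data; the cell size and sample points below are invented.

```python
# Hedged sketch: grid scattered survey points (x, y, z) into a DEM raster
# by averaging z per cell. Cells with no measurement stay NaN.
import numpy as np

def grid_dem(points, cell=1.0):
    """Average z per (cell x cell) bin; NaN where no point fell."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    ix = ((x - x.min()) // cell).astype(int)   # column index per point
    iy = ((y - y.min()) // cell).astype(int)   # row index per point
    dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    sums = np.zeros_like(dem)
    counts = np.zeros_like(dem)
    np.add.at(sums, (iy, ix), z)               # unbuffered accumulation
    np.add.at(counts, (iy, ix), 1)
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]
    return dem

# Four illustrative survey points on a 2 m x 2 m patch, 1 m cells
dem = grid_dem([(0.2, 0.3, 10.0), (0.8, 0.4, 12.0),
                (1.5, 0.5, 20.0), (0.5, 1.5, 30.0)])
```

Real DEM pipelines usually interpolate (e.g., by triangulation or kriging) rather than bin-average, but the binning version shows the core idea of rasterizing point measurements into an elevation grid.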
3D laser scanners of this kind are nowadays widely used in archaeology and cultural heritage; this one enabled us to efficiently create accurate 3D models of the objects. The scanner is lightweight and small, and its portability is a great advantage in conducting surveys on the upper parts of the fortress, where we need to climb a steep hill every time we work. Using the FARO SCENE software, we processed the scans and generated 3D models. At this step, ortho-images were also derived and brought into a GIS environment. All the models of the investigated area were overlaid and visualized in the GIS environment (Fig. 7). Clearly, the use of GIS is also quite significant in terms of data management. Along with the 3D laser-scanning documentation, we have recorded the x, y, and z coordinates of all the excavated objects with a total station since we began the excavations at Kafir Kala, and we have taken numerous photographs during the survey process.

Figure 5. Digital elevation model of the fortress. (a): top view; (b): bird's-eye view.

Figure 6. FARO Focus 3D laser scanner (photo by Tomoyuki Usami).

One of the most important issues is, therefore, how to manage such a huge quantity of data. We have tackled this with the use of GIS: we put all the data obtained during the fieldwork into a GIS database with a single coordinate system and organized them there. Particular attention should be drawn to the digital survey process of trenches 6 and 7 (Fig. 7), where the large building with paintings was discovered during 2015-2016. The first step was planning the positions of the 3D laser scanner. As stated above, the building consists of a platform with evenly distributed wooden pillars, with some of the square burned wooden bases, approximately 90 cm on each side, remaining. The presence of entrances was also identified. We carefully decided on the placement of the scanner in order to fully and accurately scan such complex structures (Fig. 8).
We then acquired 12 scans in trench 6 and 8 scans in trench 7. Next, the scans were processed with the FARO SCENE software (Figure 9), and finally the models were generated. Ortho-images were derived at the same time and imported into the GIS environment (Fig. 10). In short, the results were positive: the platform of the building with its evenly distributed wooden pillars and square burned wooden bases, as well as the fire layer, were clearly captured. This survey process has no doubt helped to develop our understanding of the construction principles of the fortress.

Figure 7. View of the citadel with ortho-images. a. From the north-east; b. from the top.

Figure 8. Trenches 6 and 7 with the positions of the scanners. (a): trench 6 (from the south); (b): trench 7 (from the south-west).

Figure 9. Screenshot of the point cloud (trench 7 as an example).

Figure 10. The image of trench 7.

4. Results of the Investigations and Future Work

This article has presented the results of the excavations by the Japanese-Uzbek expedition at Kafir Kala and its digital survey efforts, with a special focus on the large building with paintings discovered in the fortress area. Kafir Kala, which is well known for the discovery of unique seals, has been continually excavated and accurately documented with the use of 3D laser-scanning technology. 3D measurement techniques are quite helpful for capturing the complexity of the building and other structures that are difficult to describe in detail using traditional survey methods. Although the investigations of our project are still in progress, we have gradually developed our understanding of the planning and arrangement of the inner fortress before the fire occurred.
We have identified the gate; the sufas along the east and west walls; the courtyard in the center; the slope with burned brick tiles that leads to the entrance; square holes with the carbonized wooden bases of pillars; and the walls and corridors surrounding the fortress, as well as the corner towers at its four corners. The building with paintings, which occupies the north-eastern part of the fortress, is no doubt a key point. It is difficult to infer the exact function of the fortress at this point. However, as future work, we will continue the excavations and documentation of the remaining part of the citadel, which will allow us to develop our understanding. We assume that the room with entrances connected to trench 7 may hold an important secret of the fortress, which could lead us to discover its actual function: a castle or a temple.

5. Acknowledgements

The excavation work was supported by the Sasakawa Scientific Research Grant from the Japan Science Society. The authors are particularly grateful to M. Tosi and S. Mantellini for scientific support.

6. References

A. Begmatov et al. 2016. Excavations of the Uzbek-Japanese expedition on the site of Kafir Kala. In International Conference on Archaeology of Uzbekistan during the Years of Independence: Progress and Perspectives. Samarkand. 116-118.
A. E. Berdimuradov and M. K. Samibaev. 1999. Khram Dzhartepa-II: k problemam kul'turnoj zhizni Sogda v IV-VIII vv. (The temple of Jartepa-II: on the problems of the cultural life of Sogd in the 4th-8th centuries). Tashkent.
A. Berdimurodov et al. 2016. Bully s buddijskim sjuzhetom s gorodischa Kafirkala. Arkheologija Uzbekistana 2 (13): 51-60.
S. Cazzoli and C. G. Cereti. 2005. Sealings from Kafir Kala: preliminary report. In Ancient Civilizations from Scythia to Siberia, vol. 11, 1-2, 133-164.
R. Dimartino. 2011. Studio analitico della cultura materiale fra VII e IX secolo d.C.
nella regione di Samarcanda (Uzbekistan): analisi morfo-tipologica, produzione e commercio della ceramica di Kafir Kala. Dottorato di ricerca in Bisanzio ed Eurasia. Università di Bologna.
F. Grenet and E. de la Vaissière. 2002. The last days of Panjikent. Silk Road Art and Archaeology 8, 155-96.
G. V. Grigorjev. 1941. Tali-Barzu kak pamyatnik domusul'manskogo Sogda. (St. Petersburg) Archive of the Institute of History of Material Culture, Russian Academy of Sciences, fond 35, opis' 2, delo 92.
G. V. Grigorjev. K voprosu o khudozhestvennom remesle domusulmanskogo Sogda. KSIIMK, XII, M.-L.
Ibn Ḥawqal. 1976. Kitāb ṣūrat al-arḍ. J. Kramers, in Bibliotheca Geographorum Arabicorum II, 3rd ed. Leyden.
S. Mantellini and A. Berdimuradov. 2005. Archaeological explorations in the Sogdian fortress of Kafir Kala. In Ancient Civilizations from Scythia to Siberia, 11 (1-2): 107-132.
M. E. Masson. 1928. O mestonakhozhdenii sada Timura Davlet-Abad. Izvestija Sredneaziatskogo Komiteta. Tashkent. 43-48.
V. A. Nil'sen. 1965. K voprosu o naznachenii sogdiiskogo zdanija okolo Kafir-Kala pod Samarkandom. IMKU 6, 116-123.
G. A. Pugachenkova. 1983. Ishtikhanskie drevnosti (nekotorye itogi issledovanii 1979). Sovetskaja Arkheologija, 259-270.
G. V. Shishkina. 1961. Rannesrednevekovaja sel'skaja usad'ba pod Samarkandom. IMKU 2, 192-222.
T. Uno and A. Berdimurodov. 2013. The Site of Kala-i Dabusia: Sogdian City along the Silk Road. Shinyosha. Archaeological Research of City Centers of Central Asia.

Received March 2017; revised July 2017; accepted August 2017.

Amphitheater of Volterra: Case Study for the Representation of Excavation Data

Carlo Battini, University of Genoa, Italy
Elena Sorge, Soprintendenza Archeologia della Toscana, Italy

The present paper describes how different surveying techniques can be used together for a better understanding of the artefact under investigation.
Digital surveying tools such as terrestrial laser scanning (TLS) and structure-from-motion (SfM) software can be used together and made to interact with one another in order to compile an exhaustive database, rich in colorimetric and metric information. The case study examined in the present research is the discovery of the amphitheater of Volterra. It was discovered in July 2015 during the clearance of a local stream, and it is located near Porta Diana, a few hundred meters from the Roman theater, which was discovered in the last century. The excavation campaign took place between October and November 2015, and it revealed the ridges of the structure's support walls. The structure consists of three levels, which together are nearly ten meters deep. In the post-processing phase, three-dimensional models were used to create the metric images necessary to study the stratigraphic units. Moreover, during this phase, it was possible to test the ability of mobile applications to visualize and manage 3D models and excavation information.

Key words: mobile app, virtual, amphitheater, 3D models, digital survey

SDH Reference: Carlo Battini and Elena Sorge. 2017. Amphitheater of Volterra: Case Study for the Representation of Excavation Data. SDH, 1, 2, 13 pages. DOI: 10.14434/sdh.v1i2.23242

1. Introduction

The present paper describes an experimental method for organizing the data gathered during an archaeological excavation. Using different surveying techniques simultaneously can contribute to a better understanding of the studied artefact at various levels of detail. After having been validated, the gathered data were inserted into a database managed by mobile systems. The present paper studies a prototype of a mobile application, currently being implemented, to organize information with virtual visualizations.
Features of the project include adding textual information, saving graphic notes, executing linear measurements, and taking photographs localized in the virtual three-dimensional space.

Authors' addresses: Carlo Battini, DICCA Department of Civil, Chemical and Environmental Engineering, University of Genoa, 16145, Via Montallegro 1, Genoa, Italy; email: carlo.battini@unige.it; Elena Sorge, Soprintendenza Archeologia della Toscana, Florence, Italy; email: elena.sorge@beniculturali.it. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal

2. The Discovery of the Amphitheater

The amphitheater of Volterra was rediscovered on 8 July 2015, with the uncovering of a 40-meter-long curved wall (Fig. 1, Fig. 2). Thanks to funding from the Bank of Volterra, it was possible to start the first excavation campaign, which confirmed the hypothesis that this structure was indeed a forgotten Roman amphitheater consisting of three floors (Fig. 3).

Figure 1. The discovery of the first stretch of Roman masonry.

Figure 2. The curved line of the wall is visible. At this stage of the work, the presence of a Roman amphitheater was hypothesized.

Figure 3. After just one day of excavation, archaeologists found the first confirmation of the presence of a structure hidden in the ground.

The aim of our first season, which lasted six weeks, was primarily to confirm the existence of the monument itself, as well as to plan the subsequent excavation campaigns, which were postponed as winter was drawing near. For this reason, only the first layers were removed, thus revealing the actual dimensions of the amphitheater. This sample provided concrete evidence that the structure discovered was, in fact, an amphitheater composed of three floors, built with steps, which, however, were not found in this section.
The cavea was vast and structured on three levels, prima, secunda, and summa cavea (or maenianum primum, secundum, and summum), separated by the praecinctiones, which can be described as thick concentric support walls. The arena is located at a height that we estimate to be two meters below the point reached at the end of the excavation; it should therefore be circa 6 meters from the current excavation level (Fig. 4). Architecturally, the amphitheater could likely have had a mixed structure, with strong similarities to the Vallebuona theater, i.e., inserted (where possible) into the slope of a hill, with other parts built on substructures, as research carried out in one of its carceres (chariot pit) seemed to prove. During the first survey, it was not possible to examine the ruins; therefore, nothing can currently be said about the actual state of conservation of the structure. The non-invasive geo-diagnostic surveys carried out in the area during the winter by SOING, a company based in Livorno, Italy, and a project partner of the Italian Superintendency of Cultural Heritage, showed that the structure was even larger than previously suspected (Fig. 5).

Figure 4. The portion of the cavea of the amphitheater with the carceres of the first two orders delimited by the first and second praecinctio.

Figure 5. Image of the non-invasive geo-diagnostic surveys carried out during winter in the area by SOING (Livorno, Italy).

Thanks to a collaboration with the Department of Civil, Chemical and Environmental Engineering (DICCA) in Genoa, it was possible to carry out digital surveys to study the samples taken. The reconstruction of the monument showed that its dimensions are around 82 m x 64 m, with three stone step floors, as demonstrated in the last excavated area (Fig. 6).

Figure 6. Location of the archaeological assays performed.
The total dimensions of the monument are larger than previously hypothesized: they seem to be around 82 m x 64 m.

3. Excavation and Recording Information

The excavation procedures were implemented using modern techniques of three-dimensional surveying, which can acquire a vast amount of data from which to create a rich database for investigating the finds. Nowadays, these methods are essential for data interpretation, conservation, and storage, as well as for the appreciation of the finds, thanks to interactive systems of visualization [Russo et al. 2011]. As usually happens, the techniques of digital surveying, i.e., terrestrial laser scanning (TLS), topography, and the structure-from-motion (SfM) technique [Bertocci 2014], can be used together to acquire as much information as possible, depending on the peculiarities of each technique. These surveying techniques have different levels of definition and error; however, they can be compared and used simultaneously to enrich the database with new information. In the survey of the amphitheater of Volterra, the three techniques were implemented according to a precise survey plan, which was fundamental for analyzing the quality of the data collected as well as for defining an operative procedure for the data-acquisition campaign. In the first phase, a topographic support network was built to register the data coming from the TLS (Fig. 7) and SfM surveying techniques [Russo et al. 2011].

Figure 7. The first laser scanner survey campaign: a) Z+F IMAGER 5006h scanner; b) point cloud displayed within the Leica Cyclone 9.1 software.

Several scientific contributions have studied these two methods, which are today well established in the discipline of surveying, with the aim of determining the potential strengths and limitations of using them in alternation or at the same time [Beraldin 2004; Boehler and Marbs 2004].
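Registering scan data to a topographic support network, as described above, amounts to finding the rigid transform (rotation plus translation) that best maps control points measured in the scanner's local frame onto the same targets measured by the total station. A common least-squares solution is the Kabsch/SVD fit sketched below; this is the general technique, offered as an illustration rather than the specific routine used by the survey team, and the point coordinates are invented.

```python
# Hedged sketch: least-squares rigid registration (Kabsch algorithm).
# Given matched control points in scan coordinates (src) and in the site
# frame (dst), find R, t minimizing sum ||R @ src_i + t - dst_i||^2.
import numpy as np

def rigid_fit(src, dst):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard vs. reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Illustrative targets: the scan frame is a 90-degree turn plus a shift
scan_pts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
site_pts = [[5, 5, 0], [5, 6, 0], [4, 5, 0], [5, 5, 1]]
R, t = rigid_fit(scan_pts, site_pts)
aligned = (np.asarray(scan_pts) @ R.T) + t           # scan mapped to site frame
```

With three or more well-distributed, non-collinear targets the fit is determined; surplus targets average out individual measurement errors, which is why a support network with redundant control points is built before scanning.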
The application of these methods in the field of cultural heritage [Boehler and Marbs 2004; Grussenmeyer et al. 2008] highlights how some parameters are fundamental to a successful survey. Dimension, geometric complexity, and coloration can be pivotal factors in increasing or diminishing measurement errors at the moment of three-dimensional acquisition. Therefore, particular attention was given to the definition of a maximum admissible error. Owing to the presence of sandy soil and ashlar walls, and because of the dimensions of the structures to be surveyed, the maximum allowable error was set at 1.5 cm. With this value, it was possible to store all the information on the collected stratigraphic units as well as to operate with an expeditious approach during the survey stage. The processes leading to the evaluation of this error included the comparison of the databases acquired with the two different surveying techniques. For the comparative test, the decision was made to analyze the result of the first excavation campaign, which was characterized by extreme geometrical complexity as well as by the presence of areas with soil and well-preserved wall structures. Using the Z+F Imager 5006h phase-shift laser scanner, which can scan structures within a range of 79 metres with an accuracy of 2 mm, it was possible to record the geometrical structures of the archaeological site together with the reflectance value of the material. The point clouds, which were needed to create a three-dimensional model of the sample, were acquired from 14 scanner positions. The point clouds were converted into a 3D mesh model by simplifying the data with a distance of 1 cm between all points (a value below the acceptable error defined in the preliminary stage). Subsequently, the resulting three-dimensional model underwent a process of cleaning and elimination of all imperfections. The final result was a model of 12,058,625 triangles (Fig. 8).
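The error evaluation by comparison of the two databases can be mimicked with a nearest-neighbour cloud-to-cloud distance, checked against the 1.5 cm admissible error. A brute-force sketch with illustrative coordinates (tools such as CloudCompare use spatial indexing for real clouds of millions of points):

```python
import math, statistics

def cloud_to_cloud(reference, test):
    """For each test point, the distance to its nearest reference point
    (brute force; fine for a sketch, too slow for full survey clouds)."""
    return [min(math.dist(p, q) for q in reference) for p in test]

def within_tolerance(reference, test, max_err=0.015):
    """Accept the comparison if the mean and standard deviation of the
    cloud-to-cloud distances both stay below the admissible error (m)."""
    d = cloud_to_cloud(reference, test)
    return statistics.mean(d) < max_err and statistics.pstdev(d) < max_err

# Illustrative TLS reference points and slightly offset SfM points.
tls = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
sfm = [(0.004, 0.0, 0.0), (1.0, 0.002, 0.0), (1.0, 1.0, 0.003)]
```

With the survey's reported values (mean 0.002917 m, standard deviation 0.012793 m), both statistics fall below the 0.015 m limit, which is exactly the acceptance criterion this sketch encodes.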
At the same time, the 3D data of this excavation area were captured using the SfM technique. The surveying process was carried out with a Nikon D5000 reflex camera using an 18-105 mm lens. By maintaining the same focal aperture, it was possible to acquire a great number of shots, which were subsequently processed using PhotoScan (version 1.2.2; Fig. 9).

Figure 8. Three-dimensional model composed of approximately 6 million polygons derived from the point cloud acquired with the laser scanner.

Figure 9. The acquired images (all shot with the same focal length) were used to reconstruct the three-dimensional model with the SfM technique.

After the processes of image alignment and the construction of a sparse cloud, a dense cloud, a 3D model, and texture, it was possible to create a 3D model with the same orientation and dimensions as the result obtained with the TLS technique. The two three-dimensional models were compared using the CloudCompare 2.6.2 software, in order to evaluate the differences between the two surveying techniques. The values obtained were a mean distance of 0.002917 m and a standard deviation of 0.012793 m, both below the limit defined during the preliminary phase (Fig. 10). Based on this comparison, for the subsequent three-dimensional surveys the research team decided to adopt the SfM technique (with topographic support) as the main tool for acquiring metrical information. This technique proved to be less expensive in the data-elaboration phase and could also be used by less experienced staff, which avoided excessive interruption of the excavation processes.

Figure 10. The comparison of the two three-dimensional models (TLS and SfM) within the open-source software CloudCompare 2.6.2.

4. Mobile application for data management

The collection of information is only the first step in understanding an archaeological object.
Data must be catalogued and made available to experts of various disciplines in order to interpret the historical evolution of the site. The archaeological excavation is the moment in which all information not collected and catalogued is definitively lost, disrupting the process of understanding the object investigated [Bezzi et al. 2011]. Collecting and recording the information at regular intervals is, therefore, the first step in scientifically validating the information collected [Grabner et al. 2003]. New research projects can be developed with the use of commercial platforms and open-source tools, employing three-dimensional models, images, texts, and sounds to view and interact with the collected data. Several studies, such as Archeoguide [Vlahakis et al. 2002], created guided tours in augmented reality with head-mounted displays. Other projects, such as Virtual Rome, reconstructed the landscape of modern-day Rome and that of Rome in the second century A.D. in common web browsers with three-dimensional online reproduction, including multimedia insights [Pescarin et al. 2009]. Currently, the 3D Heritage Online Presenter (3DHOP) platform, developed by the Visual Computing Laboratory of ISTI-CNR in Pisa, is a promising project for visualizing digitized media assets on the web [Potenziani et al. 2015]. Its multiresolution encoding efficiently streams high-resolution 3D models, such as the sampled models usually employed in cultural heritage applications. In addition, it provides a series of ready-to-use templates and examples tailored to the presentation of CH objects, and it interconnects the 3D visualization with the rest of the DOM webpage, making it possible to create integrated presentation schemes (3D and multimedia). The present research aims to determine how a mobile application can aid the understanding and study of an archaeological object.
In particular, virtual reality can be an essential tool for understanding the spatial morphology of the artefact, allowing users to consult and add information pivotal to understanding the object's evolution. The aim is to create an application able to: browse three-dimensionally within the 3D models of the collected samples; take measurements from one point to another; record textual and graphical notes located in 3D space, in order to facilitate the study of the object; and take photos using the camera of the mobile device and locate these images at specific points of the 3D model. The project was realized using Unity 3D, a development environment usually adopted for videogames and able to provide all the utilities necessary to create a 3D environment. The development of the proposed application started by defining the features necessary to support user interaction. The application should visualize three-dimensional models, enable virtual browsing, and organize text and images. Data can already be present in the application, or it can be inserted at runtime by the user while browsing. In order to promote future development of the application and to give it greater versatility, the decision was made to create an SQL database, updated on a server, to exchange and save the collected information (Fig. 11).

Figure 11. Data are not stored locally but are sent, through the appropriate connection script, to a MySQL database on a central server.

Interactions with the application, such as virtual browsing, data addition, and connection with the database, are handled by specific scripts written in C# (Fig. 12) that manage the variables necessary to implement each action.
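The central store for spatially anchored notes described above could take a shape like the following. The schema is hypothetical (table and column names are ours, not the project's), and sqlite3 stands in for the MySQL server:

```python
import sqlite3

# Hypothetical schema for the notes the app sends to the server:
# each note is anchored to a 3D point on a named excavation sample.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE note (
        id     INTEGER PRIMARY KEY,
        sample TEXT NOT NULL,      -- GameObject / sample name
        kind   TEXT NOT NULL,      -- 'text', 'sketch' or 'photo'
        x REAL, y REAL, z REAL,    -- anchor point in model space
        title  TEXT,
        body   TEXT                -- note text, or path to a saved image
    )
""")
conn.execute(
    "INSERT INTO note (sample, kind, x, y, z, title, body) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("US-102", "text", 1.25, 0.40, -2.10, "Mortar trace",
     "Thin mortar layer visible on the second step."),
)
rows = conn.execute(
    "SELECT sample, kind, title FROM note WHERE sample = ?", ("US-102",)
).fetchall()
```

Keeping the anchor coordinates in the row is what lets the app re-place a note in the 3D scene when the sample is reloaded.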
For example, the script for graphic notes was created as a sequence of specific actions: acquiring a screenshot of the user's viewpoint; creating a plane perpendicular to the visual axis, with the same dimensions as the screen of the mobile device; applying the screenshot as the plane's texture; creating the graphic note defined by the user on the touchscreen, using the Raycast function; and acquiring the resulting screenshot and saving it in the SQL database. With the Raycast function, it is possible to send invisible rays, parallel to the visual axis, through the point touched by the user on the touchscreen. The intersection of these rays with the previously created plane identifies a series of spatial points, each of which is marked with a red circle. Because the circles are close together relative to their size, they simulate a continuous line, which constitutes the graphic note. Other scripts use the Raycast function to identify the spatial coordinates of points (essential for anchoring information in the virtual space or for measuring distances) or to identify the name of a GameObject (GameObjects are the fundamental objects in Unity that represent characters, props, and scenery), which is useful for retrieving specific information from the database (Fig. 13). Three-dimensional browsing uses two virtual sticks located at the corners of the screen, allowing both the movement (left stick) and the rotation (right stick) of the user's viewpoint. Lastly, the three-dimensional models used in the present study derive from the three-dimensional surveying procedures and were simplified with retopology and advanced texturing techniques.
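Geometrically, the Raycast step described above reduces to intersecting a ray from the viewpoint with the plane that holds the screenshot texture. A language-neutral sketch of that intersection (coordinates are illustrative; in Unity the engine performs this inside `Physics.Raycast`):

```python
def ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane; return the hit point or None.

    Mirrors what the graphic-note script relies on: the ray leaves the
    camera through the touched screen point and hits the plane that
    carries the screenshot."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:            # ray parallel to the plane: no hit
        return None
    t = dot([p - o for p, o in zip(plane_point, origin)],
            plane_normal) / denom
    if t < 0:                        # plane behind the viewpoint
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking down +z; screenshot plane at z = 2.
hit = ray_plane((0, 0, 0), (0.1, 0.05, 1.0), (0, 0, 2), (0, 0, 1))
```

Each touch sample produces one such hit point; drawing a small circle at every hit is what simulates the continuous stroke of the note.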
To avoid increasing the size of the installation package, each three-dimensional model is downloaded the moment the user chooses the sample to be visualized and is then saved in a specific folder on the mobile device, making it available for subsequent visualization.

Figure 12. Use of Unity 3D to develop the mobile app. On the right: the programming window of the scripts in the C# language.

Figure 13. Some screenshots of the application: a) list of the excavations on the server side; b) pop-up menu with programmed features; c) VR view of the excavation; d) drawing on the snapshot; e) window to enter a title and description; f) snapshot stored server-side; g) visualization of information entered; h) window with storage of a photo and title.

5. Conclusion

The technology of representation is constantly developing, offering the user ever higher interactivity. Refined virtual-reality techniques and increasingly powerful technological tools are the fundamental requirements for creating useful applications in the field of cultural heritage, which can help researchers to share and disseminate information.
These tools and applications should not be regarded as distant from the discipline of cultural heritage; on the contrary, they facilitate the operations of research and sharing far more effectively than traditional methods.

6. Acknowledgements

The authors wish to thank: Valeria D'Aquino, Giacomo Baldini, Giano SNC, Paolo Nannini, Giovanni Roncaglia, Stefano Sarri, Pasquino Pallecchi, Domenico Zaccaria, the Municipality of Volterra, Stockholm University, Cassa di Risparmio di Volterra, Fondazione Cassa di Risparmio di Volterra, and Franca Taddei. The authors also acknowledge that paragraph 2 was written by E. Sorge, while C. Battini wrote the remaining paragraphs.

7. References

Elsevier BV. 58 (Jul), ex1. https://doi.org/10.1016/S0924-2716(04)00027-9
Lorenzo Còveri. Il genovese del Quattrocento, lingua della Repubblica. In Italia settentrionale: crocevia di idiomi romanzi. De Gruyter. https://doi.org/10.1515/9783110910346.261
Giuliano De Felice and Maria Giuseppina Sibilano. 2010. Strategie di documentazione per la ricerca e la comunicazione archeologica. Il caso di Faragola (Foggia, Italia). Virtual Archaeology Review. 1 (May): 95. https://doi.org/10.4995/var.2010.4696
Fabio Remondino. 2011. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sensing. 3 (May): 1104-1138. https://doi.org/10.3390/rs3061104
Stefan Hynst, Michael Gervautz, Markus Grabner, and Konrad Schindler. 2001. A work-flow and data model for reconstruction, management, and visualization of archaeological sites. In Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage (VAST '01). ACM Press. https://doi.org/10.1145/584993.585000
P. Grussenmeyer, E. Alby, T. Landes, M. Koehl, S. Guillemin, J. F. Hullo, P. Assali, and E. Smigiel. 2012. Recording approach of heritage sites based on merging point clouds from high resolution photogrammetry and terrestrial laser scanning.
ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. XXXIX-B5 (Jul): 553-558. https://doi.org/10.5194/isprsarchives-XXXIX-B5-553-2012
Jaron Lanier. 1994. A virtual reality-based simulation of abdominal surgery. Defense Technical Information Center.
Luigi Calori, Carlo Camporesi, and Sofia Pescarin. 2009. Virtual Rome. In Proceedings of the 14th International Conference on 3D Web Technology (Web3D '09). ACM Press. https://doi.org/10.1145/1559764.1559792
Marco Potenziani, Marco Callieri, Matteo Dellepiane, Massimiliano Corsini, Federico Ponchio, and Roberto Scopigno. 2015. 3DHOP: 3D Heritage Online Presenter. Computers & Graphics. 52 (Nov): 129-141. https://doi.org/10.1016/j.cag.2015.07.001
Michele Russo, Giorgia Morlando, and Gabriele Guidi. 2007. Low-cost characterization of 3D laser scanners. In Videometrics IX. J.-Angelo Beraldin, Fabio Remondino, and Mark R. Shortis (eds.). SPIE. https://doi.org/10.1117/12.705712
V. Vlahakis, M. Ioannidis, J. Karigiannis, M. Tsotros, M. Gounaris, D. Stricker, T. Gleue, P. Daehne, and L. Almeida. 2002. Archeoguide: an augmented reality guide for archaeological sites. IEEE Computer Graphics and Applications. 22 (Sep): 52-60. https://doi.org/10.1109/mcg.2002.1028726

Received October 2016; revised September 2017; accepted November 2017.

Archiving the Past While Keeping Up with the Times

Valentijn Gilissen and Hella Hollander
Data Archiving and Networked Services (DANS), The Netherlands

The e-depot for Dutch archaeology started as a project at Data Archiving and Networked Services (DANS) in 2004 and developed into a successful service, which has ever since been part of the national archaeological data workflow of the Netherlands.
While DANS continuously processes archaeological datasets and publications and develops expertise regarding data preservation, various developments are taking place in the data landscape, and direct involvement is necessary to ensure that the needs of the designated community are best met. Standard protocols must be defined for the processing of data with the best guarantees for long-term preservation and accessibility. Monitoring the actual use of file formats, and the use of their significant characteristics within specific scientific disciplines, is needed to keep strategies up-to-date. National developments include the definition of a national metadata exchange protocol, its accommodation in the DANS EASY self-deposit archive, and its role in the central channelling of information submission. In an international context, projects such as ARIADNE and PARTHENOS enable further developments regarding data preservation and dissemination. The opportunities provided by such international projects enriched the data by improving options for data reuse, including the implementation of a map-based search facility in DANS EASY. The projects also provide a platform for the sharing of expertise via international collaboration. This paper details the positioning of the data archive in the research data cycle and presents examples of the data enrichment enabled by collaboration within international projects.

Key words: Data archiving, preservation, standards, collaboration, access, portals.

SDH reference: Valentijn Gilissen and Hella Hollander. 2017. Archiving the past while keeping up with the times. SDH, 1, 2, 12 pages. DOI: 10.14434/sdh.v1i2.23238

1. DANS and the research data cycle

Data Archiving and Networked Services (DANS) is the Dutch research data archive.
Predecessors in data archiving in the Netherlands date back to 1964; DANS was established in 2005 following revisions of the existing initiatives by the Royal Netherlands Academy of Arts and Sciences (KNAW) and the Netherlands Organisation for Scientific Research (NWO) [DANS 2017]. DANS has a notable role within both of its founding organizations. The KNAW is the umbrella organization of fifteen internationally renowned Dutch research institutes, including DANS. DANS promotes sustained access to digital research data and carries out research on this topic; its data archiving services additionally serve as essential support to all the other research institutes. NWO is the organization which funds scientific research at public research institutions in the Netherlands. Research funded by NWO should ensure that the resulting data are archived in a sustained form, which DANS enables through its online archive, the Electronic Archiving System: EASY [DANS EASY 2017].

Author's address: Valentijn Gilissen and Hella Hollander, Data Archiving and Networked Services (DANS), Anna van Saksenlaan 51, 2593 HW Den Haag, The Netherlands; email: valentijn.gilissen@dans.knaw.nl; hella.hollander@dans.knaw.nl. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal.

The research data cycle can be summarized as follows: a researcher produces research data; another researcher needs to be able to find, access, and re-use the data for new research; the initial research should be referenced; the new research results in its own dataset; and so on. The policies of DANS aim to ensure the sustainability of the research data cycle in the long term. To ensure preservation of data, DANS states that data should be archived in a repository which complies with international standards and guidelines of trustworthiness: a certified "trusted digital repository" (TDR).
To support the use of trusted digital repositories, funding organizations should oblige researchers to deposit their research data in a TDR. In order to make data accessible, DANS promotes open access, but understands that, for reasons such as the privacy sensitivity of certain data files, it is not always possible to make data available without restrictions. The position of DANS is "open if possible, protected if necessary." To accommodate re-use of data, as well as to attribute credit to researchers, persistent identifiers (unique hyperlinks that resolve in a specific manner in order to remain valid over the long term) should be used for referencing data sources in scientific publications. A dataset should have the same scientific value as a research article; the persistent identifier can be likened to the ISBN of a publication. Archives use the Open Archival Information System (OAIS) reference model [CCSDS 2012] to identify all the aspects which need to be taken into account when managing data between submission by a data producer and dissemination to a data consumer. While the OAIS is not a set of instructions on "how to build your own archive," the functions and concepts defined within the OAIS allow an internationally shared view of what an archive should encompass. The reference model supports the methods for certifying an archive as a trusted digital repository according to international standards. There are three degrees of certification for TDRs: the basic/essential certification of the Data Seal of Approval (DSA); the extended certification of the nestor Seal (DIN 31644); and the formal certification as an ISO standard (ISO 16363). The DANS archiving system EASY was awarded the nestor Seal in early 2016 and was the first digital archive in the world to obtain this certificate. The OAIS reference model is presented as a schematic process of "management" situated between the producer and the consumer.
Merely having an archive that is OAIS-compliant would not fully meet the aims DANS has as a digital archiving organization for research data. If the archive takes its proper place within the research data cycle, the flow of data should continue, with the data consumer becoming a new data producer who uses persistent-identifier citation for referencing the source data.

2. Inside DANS EASY

Figure 1 shows the homepage of DANS EASY with its main features highlighted in text boxes. A central search bar can be used to search through all metadata of all datasets. The information provided under "search help" informs the user of search options such as the use of AND/OR booleans and wildcard characters. Alternatively, the links to advanced search and browse options can be followed to perform more specific searches. A prominent button for submitting a new dataset deposit is presented in the center of the screen. To proceed with a deposit, or to access certain data files, users need to be logged in to the system; the options for registering or logging in remain present at the top of the screen until the user does so. It is not necessary to log in to search and browse through datasets, or to view their metadata; restricted access only pertains to downloading or opening data files. The bottom of the screen holds a number of links to background information on the use of data, including full instructions for citing data. Additionally, the trusted-digital-repository certificates which were obtained for EASY are presented here by their seals, with links to detailed information on the subject.

Figure 1. An overview of the homepage of EASY, DANS' electronic archiving system.

Datasets in EASY are described with metadata following the international standard (qualified) Dublin Core. The advanced search options enable searches within specific Dublin Core metadata fields: title; creator; description; subject; coverage; identifier.
A good example of the benefit of advanced search is a researcher who wants to find archaeological research projects conducted in the center of the Dutch city of Houten. "Houten" is also the Dutch word for "wooden." To avoid searching for Houten and additionally getting all the results of datasets that mention wooden objects in their metadata, it is recommended that the user perform an advanced search for Houten within the metadata field "coverage" (temporal and spatial coverage). Alternatively, if a researcher wants information on wooden artefacts but wants to avoid results which are only about the city of Houten, a search for "houten" should be done within the metadata field "subject." Browse options include means to refine search and browse results by audience, access category, or additional search queries. If, following the above example, the researcher gets many results for a search for "houten" in "coverage," it could be that the Dutch word for "wooden" is part of a street name in a different city. A follow-up could be another advanced search for "coverage": "Utrecht," the name of the province in which the city is located. When a dataset is viewed, the user sees it presented in three tabs. Overview: a highlighting of the dataset title and (abstract) description from the metadata, which can be accompanied by pictures, illustrations, or logos representing the dataset; the overview also displays the correct way to cite the dataset in literature, by use of the "digital object identifier" (DOI) persistent identifier. Description: all of the (qualified) Dublin Core metadata provided for the dataset. Data files: the data files published with the dataset, with options for showing additional details if available, and for downloading if accessible; if not accessible, the page explains what conditions need to be met in order to download the data. Datasets are described and deposited by researchers themselves.
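The fielded-search behaviour in the Houten example above can be sketched with toy records; the record contents are illustrative, not real EASY metadata:

```python
# Toy records mimicking (qualified) Dublin Core descriptions in EASY.
records = [
    {"title": "Excavation Houten-Castellum",
     "subject": ["archaeology"], "coverage": ["Houten", "Utrecht"]},
    {"title": "Wooden artefacts from a terp",
     "subject": ["houten voorwerpen"], "coverage": ["Friesland"]},
]

def fielded_search(records, field, term):
    """Match a term inside one metadata field only, the way an advanced
    search avoids the 'houten' (wooden) false positives a free-text
    search would return."""
    term = term.lower()
    return [r for r in records
            if any(term in value.lower() for value in r[field])]

by_place = fielded_search(records, "coverage", "houten")  # the city
by_topic = fielded_search(records, "subject", "houten")   # wooden finds
```

A free-text search over all fields would return both records for "houten"; restricting the match to one Dublin Core field separates the two meanings.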
A deposit module takes researchers through the Dublin Core metadata fields as well as a page where they can upload the data files. Few fields are mandatory, but a depositor is encouraged to fill in as much metadata as possible to make a dataset findable as well as understandable to anyone. If a research project was done in the city of Houten, but Houten is only part of the title and not specifically entered as "spatial coverage," the dataset would not be found in the use-case scenario given above. When a dataset is deposited, a data manager at DANS checks the incoming dataset for completeness and understandability. The manager may make minor changes or additions to the metadata, or may migrate file formats if this benefits the long-term preservation and accessibility of the data. The dataset is only published after the data manager has performed all of the relevant quality assessments and preservation actions. At the beginning of 2017, EASY contained over 33,000 published datasets from various scientific disciplines, and it continues to receive new datasets on a daily basis. DANS especially accommodates the scientific disciplines classified under the humanities, social sciences, and behavioral sciences; the Dutch technical universities manage exact mathematical science data themselves, in co-operation with DANS, to ensure sustainable storage. The vast majority of datasets in EASY come from the discipline of archaeology, totalling over 27,000 datasets and continuously growing (Figure 2). The relatively large number of archaeological datasets can be attributed to the successes of the e-depot for Dutch archaeology (EDNA), which started as a project at DANS in 2004 and is now embedded as a service within EASY [EDNA 2017].
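The completeness check described above, applied to an incoming deposit, might in toy form look like the following. The field list is illustrative: it is our selection of Dublin Core fields, not EASY's actual validation rules:

```python
# Illustrative subset of Dublin Core fields a depositor is encouraged
# to fill in (not EASY's real rule set).
RECOMMENDED = ["title", "creator", "description", "subject", "coverage"]

def review_deposit(metadata):
    """Return the recommended fields a depositor left empty, so the
    deposit form or the data manager can prompt for them before
    publication."""
    return [f for f in RECOMMENDED if not metadata.get(f)]

deposit = {"title": "Excavation in the centre of Houten",
           "creator": "Example Bureau",
           "description": "Find report and field drawings."}
missing = review_deposit(deposit)
```

In this toy deposit, "Houten" appears only in the title; the missing "coverage" entry is precisely what would make the dataset invisible to the spatial search in the earlier example.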
EDNA raised awareness of the necessity of data archiving within the archaeological scientific community, digitized many gray-literature reports for publication in EASY, and enabled all of the archaeological project bureaus, municipalities, and universities to use EASY to store and disclose their larger datasets.

Figure 2. A collage for a poster, showcasing the archaeological content of EASY.

While a number of archaeological datasets in EASY contain only a single PDF of a publication, described with Dublin Core metadata, there are many larger datasets available, and DANS especially aims to archive and publish such sets, which generally contain data tables, photographs, digital drawings, and specialist reports alongside the final publication (Figure 2). A dataset should contain the final data of a research project, and the size of the dataset generally matches the extent of the fieldwork conducted. The dataset of a non-intrusive survey will contain only a single publication; the dataset of an intrusive survey may contain more files, such as data tables. The dataset of a full excavation may contain a large set of photographs taken in the field, digital drawings and scans of drawings of pits and profiles, daily reports, a database with registration tables, and specialist determination tables for the different material types of the finds.

3. DANS participation in international projects

By participating in (inter)national projects and infrastructures, DANS contributes to sustainable access to research data. DANS is involved in a large number of projects; three are highlighted here which are of special interest to archaeologists.

3.1 CARARE

From 2010 to 2013, 29 European organizations worked together in the European CARARE project [CARARE 2017] to make two million archaeological and architectural objects accessible via the Europeana website. This target has since been exceeded by five million objects.
DANS contributed the archaeological publications that are published in EASY (Figure 3). In the process, DANS gained valuable experience with metadata mapping, with harvesting selected metadata records from EASY, with displaying resources on a map through translation of the national Dutch coordinate system, and with linking each resource back to its content via the persistent identifier.

Figure 3. The first version of the CARARE map portal, which was available until December 2016. The portal is now incorporated in Europeana Collections [Europeana 2017].

3.2 ARIADNE

ARIADNE stands for Advanced Research Infrastructure for Archaeological Dataset Networking in Europe. 23 partners from 16 European countries collaborated in the ARIADNE project from February 2013 to January 2017 with the overall goal of establishing a European research infrastructure for the integration of archaeological datasets. In addition, tools were developed that provide researchers access to this data [ARIADNE 2016]. DANS contributed data from the archaeological e-depot EDNA and the Digital Collaboratory for Cultural Dendrochronology (DCCD), making the data in EASY more visible internationally via the ARIADNE portal. DANS collaborated closely with Leiden University in data-mining and linked-data activities, which allowed the mapping and translation of Dutch concepts from the national archaeological vocabulary to international vocabularies within the European infrastructure. Additionally, data mining was performed on the content of PDF publications from datasets whose metadata lacked coordinate information. This enabled the addition of correct coordinates to the metadata of about 3,500 datasets, a result of clear mutual benefit to DANS (elaboration of metadata) and the ARIADNE project (addition of content). On the ARIADNE portal (Figure 4), thousands of resources can be found via a map, a timeline search, and a keyword search, all with various options for filtering results.
The mapping of keywords from national vocabularies allows comprehensive cross-searching of the resources from all partners.

Figure 4. The ARIADNE portal [ARIADNE 2017].

Collaboration within ARIADNE led to the publication of new Guides to Good Practice, including a guide on dendrochronology [Brewer and Jansma 2016] and a guide on 3D data [Trognitz et al. 2016]. The activities performed and the experience gained in having the EASY content displayed on the ARIADNE portal additionally enabled DANS to develop a map display feature in EASY, which was implemented at the beginning of 2016. Search and browse results are initially shown in a list, as has always been the case, but the display can now be switched from "list" to "map" (Figure 5a). All of the search/browse results which include coordinates are then displayed on OpenStreetMap in agglomerations of results. Zooming in causes the agglomerations to spread out, to the point that single results are shown.

Figure 5a. The map display feature for browse results in EASY.

Figure 5b. A zoom on search results for Houten in the map display feature.

Figure 5b shows a zoom on an advanced search for "Houten" in "coverage," following on from the example presented in Section 2. With several dozen hits, showing the search results in list display would not be very helpful for a researcher who is only interested in archaeological research carried out in the center of the city. Switching to map display allows zooming to the location and selecting single results in the target area. Apart from the invaluable contributions to the development of the map display feature, DANS was also able to use the work done on metadata mapping within the ARIADNE project to contribute to the national development of an XML standard for archaeological data tables. This XML standard was developed by the Dutch archaeological sector as a national exchange protocol [SIKB 2016].
The protocol serves to provide more and better metadata for archaeological projects and to standardize terminology according to the national archaeological vocabulary. It allows for a complete export of archaeological database tables to standardized XML, which means that every archaeological company can use its own database system but still provide an export to make the data interoperable. DANS implemented the protocol in EASY to the effect that data depositors can now upload the XML with new deposits and have metadata from the XML extracted into the EASY Dublin Core metadata fields. This has proved to be a very efficient means of providing full and correct metadata with a data deposit, saving time and effort for depositors as well as for data managers.

3.3 PARTHENOS
PARTHENOS stands for Pooling Activities, Resources and Tools for e-Heritage Research Networking Optimization and Synergies. Whereas CARARE and ARIADNE focused on archaeology, PARTHENOS empowers digital research across all fields of the (digital) humanities, including history, language studies, cultural heritage, and related fields. The interdisciplinary four-year project, which began in May 2015, provides a thematic cluster of European research infrastructures (e-infrastructures and other world-class infrastructures), carries out integrating initiatives, and builds bridges between different but interrelated fields of research [PARTHENOS 2017]. Central topics are the implementation of common AAA (authentication, authorization, access) and data curation policies within the framework of the data lifecycle, including long-term preservation, certification, and intellectual property rights (IPR). Expected key results of the project are:
Guidelines on data management: to produce a coherent, authoritative, well-accepted set of policies/guidelines/tools concerning the management of the data lifecycle and related issues such as IPR, quality, and so on.
Standardization and semantics: to produce a wide set of standards and semantics, originating from community needs and tailored to the methodology and intended use by researchers.
Services and tools: to produce a coherent set of tools for carrying out research using and reusing data.

DANS is working on the harmonization of research data management within the various disciplines and the certification of their repositories. DANS contributes its guidelines and experiences on data management to the PARTHENOS resources. In return, the international collaboration on data policies and protocols allows DANS to enhance its own guidelines. One of the topics covered by such guidelines is that of preferred formats, the best choices of file formats for the long term. DANS published its preferred formats guide in September 2015 [DANS 2015], which details the best options for long-term preservation per file type (Figure 6). The guidelines aim to make data available in file formats which are, as far as possible: open formats; frequently used; and independent of specific software, developers, or vendors. A working group within DANS is responsible for maintaining the guidelines, which can be subject to revision based on issues such as new file formats occurring in dataset deposits or new software developments. PARTHENOS functions as a platform for international discussions on the subject, which also contribute to updates of the preferred formats guidelines.

Figure 6. The overview table of the DANS preferred formats guidelines.

4. Conclusion
This paper explained the role of DANS as a Trusted Digital Repository within the research data cycle and described the EASY electronic archiving system and its archaeological content. Furthermore, it gave examples of results from DANS' involvement in three international projects and the mutual benefits of DANS bringing content to those projects and enriching its own output in return.
The paper has different objectives, depending on the background of the readers. Researchers are encouraged to use the described services. The search and browse options of DANS EASY allow finding datasets from Dutch scientific research, including a large number of archaeological records; the portals of Europeana/CARARE and ARIADNE assist in finding datasets of sources from all over Europe which can be useful for further research. When creating data, the recommendations and guidelines coming from the PARTHENOS project should help in keeping the data findable, accessible, interoperable, and re-usable. When re-using data, researchers are strongly encouraged to keep the research data cycle alive by citing the data source and by depositing their own datasets. Storing data in a Trusted Digital Repository gives the best guarantees for sustainability in the long term. Organizations with a TDR, like DANS, will continue to provide content to existing portals and to participate in projects to disclose data. Therefore, new datasets deposited in a TDR such as EASY will also be enriched by making them findable through innovations such as the ARIADNE portal. Readers working in other parts of the research data cycle, such as data management, are encouraged to participate as much as possible in (inter)national projects and to work together to promote sustained access to research data. The examples given in this paper show that international collaboration can generate great mutual benefits. There is much to gain in connecting data across borders and disciplines for anyone involved, and there is no reason not to work together; we should all be friends.

5. References
ARIADNE Advanced Research Infrastructure for Archaeological Dataset Networking in Europe. 2016. Building a research infrastructure for digital archaeology in Europe. ARIADNE booklet, December 2016.
Retrieved April 13, 2017 from http://www.ariadne-infrastructure.eu/about
ARIADNE Advanced Research Infrastructure for Archaeological Dataset Networking in Europe. ARIADNE portal. Retrieved April 13, 2017 from http://portal.ariadne-infrastructure.eu
Brewer, Peter and Esther Jansma. 2016. Dendrochronological data in archaeology: A guide to good practice. Archaeology Data Service / Digital Antiquity, Guides to Good Practice. Retrieved April 13, 2017 from http://guides.archaeologydataservice.ac.uk/g2gp/dendro_toc
CARARE Connecting Archaeology and Architecture in Europeana. Retrieved April 13, 2017 from http://www.carare.eu
CCSDS Consultative Committee for Space Data Systems. 2012. Reference model for an Open Archival Information System (OAIS). Recommended Practice CCSDS 650.0-M-2, Magenta Book. Washington, DC: CCSDS Secretariat. Retrieved April 13, 2017 from https://public.ccsds.org/pubs/650x0m2.pdf
DANS Data Archiving and Networked Services. Retrieved April 13, 2017 from https://dans.knaw.nl/en
DANS Data Archiving and Networked Services. 2015. File formats, preferred formats and accepted formats. Preferred Formats, version 3.0, September 2015. Retrieved April 13, 2017 from https://dans.knaw.nl/en/deposit/information-about-depositing-data
DANS EASY Data Archiving and Networked Services Electronic Archiving System. Retrieved April 13, 2017 from https://easy.dans.knaw.nl/ui/home
EDNA e-Depot for Dutch Archaeology. Retrieved April 13, 2017 from https://dans.knaw.nl/nl/over/diensten/data-archiveren-en-hergebruiken/easy/edna
Europeana. Europeana Collections. Retrieved April 13, 2017 from http://www.europeana.eu/portal/en
PARTHENOS Pooling Activities, Resources and Tools for e-Heritage Research Networking Optimization and Synergies. Retrieved April 13, 2017 from http://www.parthenos-project.eu
SIKB Stichting Infrastructuur Kwaliteitsborging Bodembeheer. 2016. SIKB0102 Archeologie, XML exchange standard.
Retrieved April 13, 2017 from http://www.sikb.nl/datastandaarden/richtlijnen/sikb0102
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. 2016. 3D models in archaeology: A guide to good practice. Archaeology Data Service / Digital Antiquity, Guides to Good Practice. Retrieved April 13, 2017 from http://guides.archaeologydataservice.ac.uk/g2gp/3d_toc

Received March 2017; revised July 2017; accepted August 2017.

Digital 3D Reconstructed Models: Using Semantic Technologies for Recommendations in Visualization Applications
Stefanie Wefers, Ashish Karmacharya, and Frank Boochs, Mainz University of Applied Sciences, Germany
Mieke Pfarr-Harfst, Technische Universität Darmstadt

It is common for cultural heritage applications to use spatial and/or spectral data for documentation, analysis, and visualization. Knowledge of the data requirements coming from the cultural heritage application, and of the technical alternatives for generating the required data based on object characteristics and other influential factors, paves the way for the optimal selection of a recording technology. This is a collaborative process, requiring the knowledge of experts both from cultural heritage domains and from technical domains. Currently, this knowledge is structured and stored in an ontology (the so-called CoSCHKR). Its purpose is to support CH experts who are not familiar with technologies by prescribing an optimal spatial or spectral recording strategy adapted to the physical characteristics of the cultural heritage object and the data requirements of the targeted CH application. The creation of digital 3D reconstructed models for analysis and visualization purposes is becoming more and more common in humanities disciplines. Therefore, an implementation of the mechanisms involved in visualization applications into this ontology would have huge benefits in creating a powerful recommendation solution.
Illustrating the overall structure of CoSCHKR, this paper addresses and discusses challenges in structuring the processes of cultural heritage visualization and implementing these into the ontology.

Key words: Ontology, facts and hypothesis, inference, cultural heritage, data processing.
SDH Reference: Stefanie Wefers et al. 2017. Digital 3D Reconstructed Models: Using Semantic Technologies for Recommendations in Visualization Applications. SDH, 1, 2, 537-546. DOI: 10.14434/sdh.v1i2.23327

Authors' address: Stefanie Wefers, Ashish Karmacharya and Frank Boochs, i3mainz, Institute for Spatial Information and Surveying Technology, Hochschule Mainz, University of Applied Sciences, Lucy-Hillebrand-Straße 2, D-55128 Mainz, Germany; email: stefanie.wefers@hs-mainz.de; ashish.karmacharya@hs-mainz.de; frank.boochs@hs-mainz.de; Mieke Pfarr-Harfst, Technische Universität Darmstadt, El-Lissitzky-Straße 1, 64287 Darmstadt, Germany; email: pfarr@dg.tu-darmstadt.de. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal.

1. Introduction
Visualization and 3D reconstructed models have become an established feature of research in the field of cultural heritage (CH). Since the 1980s, 3D handmade models have developed a strong tradition as a medium of communication in knowledge transfer. The most popular application is displaying these digital 3D reconstructed models in a video, especially in the context of an exhibition. One of the reasons for this is that the 3D models transfer a message through images that are understood as a universal language requiring no further encoding [Pfarr-Harfst 2016]. Three-dimensionality within these models makes it possible for a broad public to get an idea of complex spatial interrelations and to contextualize CH objects within a 3D setting.
Therefore, digital 3D reconstructed models are understood (in addition to written text and spoken words) as a further medium for storing and transferring knowledge [Pfarr-Harfst 2016]. Creating scholarly digital 3D reconstructed models of CH objects is a multidisciplinary task [Pfarr-Harfst and Wefers 2016]. The knowledge about the details of the object that is to be reconstructed comes from CH experts such as archaeologists, art historians, or historians. In a first step, the basis for the modeling has to be set through (1) a compilation of relevant available publications, 3D data, spectral data, images, descriptions, drawings, paintings, etc., and, if necessary, (2) a digitization of non-digital information. Relevant information is compiled by CH experts; the data acquisition, however, is done by technical experts capable of properly applying technologies such as 3D recording or spectral recording. Afterwards, modelers begin to create the digital 3D reconstruction, working with the compiled information and data. They regularly discuss interim reconstruction results with the CH experts, allowing for adjustments and clarification of (so far) neglected details, until the digital 3D representation reflects the vision of the CH expert. How detailed this information needs to be is very often underestimated by CH experts who, both as information recipients and providers, are used to two-dimensional visualizations within their domain. In contrast to 3D reconstructed models, such 2D visualizations easily allow missing information to be hidden. For digital 3D reconstructed models, however, detailed three-dimensional descriptions of the CH object and its constructive parts are needed; if the object is to be visualized in its context, its spatial position and context are also required.
Many decisions have to be made by the CH expert: for some of these, no scholarly evidence might exist; for other decisions, further reconstruction options which are still under scholarly discussion, and might never be concluded, have to be set aside. Supporting such visualization applications could be achieved by implementing the application requirements in a machine-readable, ontology-based knowledge representation. Through Semantic Web technologies, knowledge can be inferred from ontology-based knowledge representations. The overall idea is to create a recommendation platform for CH experts that highlights emerging challenges during 3D reconstruction projects while taking individual input interactively into account.

1.1 Knowledge Representation
With the overall intention of prescribing the optimal selection of spatial and spectral recording technologies and the technical strategy or strategies for fulfilling the data-specific demands of CH applications, an ontology-based knowledge representation is currently under development. It is called CoSCHKR, an acronym for Color and Space in Cultural Heritage Knowledge Representation [Karmacharya et al. 2016; Wefers et al. 2016]. CoSCH is the acronym of the European COST Action TD1201: Color and Space in Cultural Heritage (COSCH), during which the foundation of the knowledge representation was developed (http://www.cosch.info). A cultural heritage application defines the requirements and demands that are necessary for its successful completion. These requirements and demands relate to the nature and quality of data, while the technologies and their underlying components are able to generate data of the required quality. Therefore, CoSCHKR is driven through the primary axis CH Applications → Data ← Technologies. This axis is represented through the top-level classes CH Applications, Data, and Technologies, and is defined through their interrelationships.
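In the ontology itself this axis is expressed in OWL; purely as an illustrative sketch of the matching idea (all class and property names below are invented stand-ins, not CoSCHKR's actual terms), the application-demands-versus-technology-capabilities matching can be pictured as:

```python
# Toy rendition of the CoSCHKR primary axis CH Applications -> Data <- Technologies.
# Names and structures are illustrative assumptions, not the ontology's own OWL terms.

APPLICATIONS = {
    # application: the data it requires (type, minimum quality)
    "GeometricAlteration": {"data_type": "3D", "min_quality": "high"},
}

TECHNOLOGIES = {
    # technology: the data it can generate (type, qualities achievable)
    "StructuredLight3DScanning": {"data_type": "3D", "qualities": {"high", "medium", "low"}},
    "Photography": {"data_type": "2D", "qualities": {"high"}},
}

def candidate_technologies(application):
    """Technologies whose generated data meet the application's demands."""
    demand = APPLICATIONS[application]
    return sorted(
        name for name, cap in TECHNOLOGIES.items()
        if cap["data_type"] == demand["data_type"]
        and demand["min_quality"] in cap["qualities"]
    )

print(candidate_technologies("GeometricAlteration"))  # ['StructuredLight3DScanning']
```

Where this sketch hard-codes two lookup tables, the ontology encodes the same relationships as OWL class restrictions, so that a reasoner rather than a hand-written function derives the candidate set.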
The classes are further defined through rules, which describe their semantic constructs. In addition to the three classes defining the primary axis of the ontology, the ontology includes classes representing the physical (CH) objects through the top-level class Physical Thing, as well as other external impacts through the top-level class External Influences. Both classes represent restraining/supporting constraints on the technologies. These constraints are semantically constructed through rules inside these two classes. Physical objects represented in Physical Thing impose restraining/supporting constraints on technologies, e.g., through their size and/or shape. Likewise, specifications of the project, such as the available budget, which are encoded inside External Influences, may have an impact on the recommendation of an optimal recording device, e.g., when its costs exceed the project budget. The structure, relationships, and rules are logic-based constructs encoded into the ontology through a machine-interpretable language (OWL). This enables machines to participate and assist in interpreting and concluding, through reasoning, the logic-based facts. The top-level classes and relations of CoSCHKR are illustrated in Fig. 1 [for details see Karmacharya et al. 2016; Wefers et al. 2016].

Figure 1. Top-level classes and relationships of the ontology CoSCHKR.

The rules binding CH Applications and Technologies through Data initiate a navigation through the ontology that requires knowledge assertions from the user. In many cases, these assertions reflect the actual discussion between a humanities expert, who would like to record a physical CH asset, and a technical expert, who actually records the asset. This assertive mechanism feeds facts into the ontology, and these facts provide the basis for the rules to be inferred inside the ontology.

2. State of the Art
The word "semantic" implies "meaning" or "understanding".
The technologies implementing semantics aim at explicitly describing the meaning of content; they do not aim at the content itself. The evolution of the Semantic Web framework in the late 20th century has given a boost to technologies such as artificial intelligence, which exploit the semantics of content to reason towards conclusions [Berners-Lee 2006]. In this context, ontologies play a major role: they are traditionally used to structure knowledge by defining terms and relationships describing a specific knowledge domain [Heflin 2004] and were first used in philosophy, dealing with the theory of existence [Hofweber 2011]. They define the descriptive semantics of the various entities related to a specific knowledge domain. The Semantic Web considers ontologies the main medium to express and represent structured knowledge. It uses the standardized Web Ontology Language (OWL) (https://www.w3.org/owl/), bringing expressive and reasoning power to the Semantic Web. So far, ontologies have rarely been used for implementing and representing 3D visualization, because: few studies have been carried out to evaluate and classify 3D visualization techniques, and they mainly focus on the interaction techniques [V-MUST 2013] and/or software/hardware configurations [Potter and Wright 2006] used in specific applications; and different classifications, terminologies, and taxonomies have been defined, each for specific aims and applications [Shu et al. 2008], and due to this huge variety they cannot be considered for the heuristic view which is needed to develop an ontology with the above-described purpose. Even when such ontologies are developed, they are used to define problems in designing visualization techniques and/or their preferred application areas, without giving details on the specifications of these techniques and where and how they are intended to be used.
For example, the Top Level Visualization Ontology (TLVO) provides a common vocabulary to describe visualization data, processes, and products [Brodie et al. 2004]. More recent research analyzes visualization taxonomies and proposes modifications of TLVO [Pérez et al. 2010]. Other examples are: a visualization ontology that adds semantics for the discovery of visualization services [Shu et al. 2008]; a "unifying ontology" for visualization systems that allows reasoning on the optimal use/reuse/synthesis of graphical representation for a special situation [Voigt and Polowinski 2011] (for our purpose, this ontology lacks many basic concepts, e.g., there is no representation of 3D visualizations or the targeted applications, and, last but not least, the ontology is no longer accessible); and a query-based selection of relevant visualization techniques [Métral et al. 2012]. The work on visualization ontologies primarily focuses on either classifying hardware/software and their configurations or selecting specific representations. They primarily work by querying asserted knowledge for answers. However, the selection of visualization techniques and workflows, based on the demands of the required data for a specific purpose, requires the following essential components to be listed in an ontology: (1) visualization techniques/workflows, (2) technologies dictating the data generation, (3) the content and quality required for the visualization, and (4) the accessibility of information/data that will generate the required content. Moreover, such an ontology would need to bind these components together through proper relationships and rules. If these preconditions were met, the ontology could be used to develop optimal visualization techniques and workflows adapted to the needs of the targeted application.

3.
Essentials of CoSCHKR
Currently, CoSCHKR relies on facts, which are either given by the existential physical facts of the CH asset or by the proven technical capabilities of technologies, which are needed for the entire process of an optimal data acquisition and processing adapted to the requirements of the targeted application. In both cases, the ontology infers from the facts, which are logically encoded as rules, in various related classes. The ontology requires asserted knowledge of these facts before inferring them. An illustration of such an assertive process is presented through the example of an already implemented, typical CH application, which focuses on the "analysis of geometric alteration". An example of such an application is a project focusing on archaeological waterlogged wooden samples, which were 3D documented before and after conservation treatment in order to evaluate geometric alterations caused by the conservation treatment through comparison of the data sets [e.g., Mazzola 2009] (http://www.rgzm.de/kur/index.cfm?layout=holz&content=start). As an example, we present the first basic semantic rules for this CH application Geometric Alteration (a subclass of the top-level class CH Applications; for more details, see Karmacharya et al. 2016; Wefers et al. 2016). It requires high-quality 3D data representing the object for at least two instances. The class is semantically constructed through the rule that states this requirement. The rules are defined through description logic (DL) statements [Baader 2003], but for ease of explanation we use equivalent lexical statements. Equation 1 states that the class Geometric Alteration requires at least two 3D data of the object, and Equation 2 states that these data need to be of high quality.

Geometric Alteration has requirement on Data min 2 3D Data has representation of one Physical Thing (Eq. 1)
Geometric Alteration has requirement on Data min 2 3D Data has quality High Quality (Eq. 2)

Equations 3 to 5 illustrate the rules inside the ontology which semantically define different components of the technology, while Equation 6 displays the inference result of those rules.

Structured Light 3D Scanning has main operating instrument Structured Light 3D Scanners (Eq. 3)
Structured Light 3D Scanners has measurement principle Triangulation (Eq. 4)
Triangulation has generation of Data 3D Data has quality High Quality or Medium Quality or Low Quality (Eq. 5)

This permits the inference:

Structured Light 3D Scanning has generation of Data 3D Data has quality High Quality or Medium Quality or Low Quality (Eq. 6)

The above-described first selection of technologies is afterwards iteratively inferred for optimal suitability against the restraining/supporting constraints of the characteristics of physical objects (inside Physical Thing) or other external impacting factors (inside External Influences). For example, the size of the object, in this case the archaeological waterlogged wooden samples, plays a major role in filtering out technologies. The size of these wooden samples is on average 10 x 6 x 6 cm, and is asserted as "small". Through this classification, instruments and their subsequent technical processes suitable only for recording large physical objects, such as laser scanners and laser scanning, are filtered out. These assertions are the facts that the ontology requires, and they are put forward to the user (in most cases CH experts). They are formulated through predefined knowledge inside the ontology describing its individual components. The system asks for further assertions about the knowledge of the object and other environmental or project-oriented issues in order to recommend which technology should be considered due to its optimal suitability. Fig.
2 displays a user interface simulation that recommends structured light 3D scanning for recording waterlogged wooden samples with the purpose of evaluating possible geometric alterations. As mentioned above, however, CH applications encompass not only analysis of spectral or spatial data but also, very often, visualization of data. A wide variety of visualization types (interactive, animation, graphical, etc.) exist for 3D and/or spectral data representing existing physical CH objects, reconstructions of only partly preserved CH objects, and 3D reconstructed models. Whether CoSCHKR should also address these manifold visualization types, especially the implementation of visualization applications for digital 3D reconstructed models [Pfarr-Harfst and Wefers 2016], is a challenge (see below).

Figure 2. Simulated user interface for knowledge assertion of waterlogged wooden samples.

4. Discussion and Conclusions
As regards CH applications which focus on visualization, the big difference from the other CH applications so far implemented in CoSCHKR (see the Geometric Alteration case study described above) is that non-existing or only partly existing physical things are digitally visualized in their complete and/or former condition. An example is a project focused on the scholarly digital 3D reconstruction of the Byzantine city of Ephesos [Grellert et al. 2010]. This visualization is characterized by the fact that the entire city, including its location in the landscape, its city walls, streets, squares, buildings, and monuments, is reconstructed based on the outcomes of archaeological excavations (foundations of buildings and monuments with associated but isolated building blocks and other construction parts, small finds found within the buildings, etc.) and their scholarly interpretation [e.g., Mangartz 2010; Pülz 2010; Wefers 2015].
All physically existing finds and features of the excavations are real evidence; their spatial and spectral information can be used for modeling without contradiction. However, the interpretation of these finds and features, which means putting them into a context, such as a room or object, and ascribing them a specific spatial position within that room or object, is a hypothesis. The validity of such hypotheses differs depending on the preservation condition and/or completeness of the physical object and the level of information provided by other sources. Based on this, hypotheses can be classified by a number of levels and implemented into CoSCHKR. However, the granularity of these levels still needs to be determined and cross-checked with other approaches. For instance, Kuroczyński et al. [2015] set up nine levels of hypotheses to be able to classify the scholarly content of a digital 3D reconstructed model. For this purpose, they define a hypothesis as a combination of the level of information provided by the source and the level of detail displayed in the digital model. Due to the purpose of CoSCHKR, and as described above, the level of hypothesis has to be defined differently. However, to allow a linkage of both concepts, it might be of benefit to either use the same granularity or at least reflect the granularity of Kuroczyński et al. [2015] in a possibly higher or lower granularity of CoSCHKR. Implementing such levels of hypotheses into CoSCHKR would allow us to give recommendations for the required involvement of a CH expert during the 3D modeling process. The level of hypothesis has an impact on the 3D modeling workflow: a higher level of hypothesis requires more frequent involvement of the CH expert during the 3D modeling process. For example, in the project focusing on the scholarly digital 3D reconstruction of the Byzantine city of Ephesos, a water-powered stone-sawing machine was included.
Evidence for the stone-sawing machine was found within one room of Terrace House 2, but not a single construction part of the machine itself is preserved. Only supports, postholes, chutes, the waterwheel raceway, and stones with cut-marks survive. This evidence, together with the expert's knowledge about stone-sawing machines and further contextual information, such as the time of application, gave reason to set up the hypothesis that a water-powered stone-sawing machine was constructed in this room of Terrace House 2 [Mangartz 2010]. Together with ground plans and sections of the preserved tangible CH, 2D reconstruction drawings were prepared as a basis for the digital 3D reconstruction. Especially during the digital 3D reconstruction of the machine, more detailed descriptions were requested by the 3D modelers (e.g., concerning the construction of the push rod suspension, extender wheel, frame saws, suspension of the frame saws, water wheel, mounting of the water wheel, etc.). It would have been advantageous for the whole reconstruction project if this verification process could have been planned from the very beginning. Besides the above-described implementation of the levels of hypotheses, further information about the object to be modeled and the targeted application would be required from the user as input, in order to be able to give more detailed support. However, the biggest challenge related to CH visualization applications is that they are very different from each other and rarely share similarities. Each case needs to be planned, implemented, and applied independently. This is because CH visualizations depend on hypotheses provided by CH experts' interpretations. These hypotheses affect data processing steps to varying degrees. Every hypothetical interpretation needs to be independently and carefully handled during the data processing, which is in this case the digital 3D reconstruction or modeling.
This is not the same as the case described above (Geometric Alteration), which deals with facts and evidence. Data processing activities form a workflow and are used to create a digital 3D reconstruction, which is used for a visualization application. These activities are listed as relevant subclasses of the top-level class Data Processing. They are related through rules that link them together and determine their workflow. These rules are based on facts and thus can fail to apply when asserted through hypothetical interpretations, especially when there are individual hypotheses for individual visualization cases. Such lapses can alter or break the sequence of the workflow. One of the greatest challenges within CoSCHKR is including hypotheses and linking them to data processing tasks without interfering with the inference mechanism. We have suggested the classification of hypotheses based on their correspondence with the proven evidence. This, however, requires humanities experts' involvement in data processing steps. The balance between such involvement and its influence on adjusting parameters of the defined semantic rules within data processing activities (defined through subclasses of the top-level class Technologies) is a challenge that requires further clarification through future discussion and activities.

5. Acknowledgement
The authors would like to thank M.Eng. Guido Heinz (RGZM) for his essential input regarding the development of the Technologies classes. Part of this work was supported by the COST Action TD1201 "Colour and Space in Cultural Heritage" (www.cosch.info).

6. References
F. Baader. 2003. The Description Logic Handbook: Theory, Implementation and Applications. Cambridge: Cambridge University Press.
T. Berners-Lee. 2006. Artificial intelligence and the Semantic Web, 14. http://www.w3.org/2006/talks/0718-aaai-tbl/overview.html
K. W. Brodie et al. 2004. Visualization ontologies: Report of a workshop held at the National e-Science Centre.
Report, e-Science Institute.

J. Heflin. 2004. OWL Web Ontology Language Use Cases and Requirements. https://www.w3.org/tr/2004/rec-webont-req-20040210

T. Hofweber. 2011. Logic and Ontology. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/logic-ontology

A. Karmacharya et al. 2016. Knowledge based recommendation on optimal spectral and spatial recording strategy of physical cultural heritage objects. In Proceedings of the Tenth International Conference on Advances in Semantic Processing (SEMAPRO). Venice: IARIA, 49–59.

M. Grellert et al. 2010. Ephesos – byzantinisches Erbe des Abendlandes. Digitale Simulation und Rekonstruktion der Stadt Ephesos im 6. Jahrhundert. In Falko Daim and Jörg Drauschke (eds.), Byzanz – das Römerreich im Mittelalter. Schauplätze. Monographien des RGZM 84, 2, 2. Mainz: Verlag des RGZM, 241–254.

P. Kuroczyński et al. 2015. Virtual Museum of Destroyed Cultural Heritage – 3D documentation, reconstruction and visualisation in the Semantic Web. In Virtual Archaeology (Methods and Benefits). Proceedings of the Second International Conference Held at the State Hermitage Museum, 1–3 June 2015. Saint Petersburg, 54–61.

F. Mangartz. 2010. Die byzantinische Steinsäge von Ephesos. Baubefund, Rekonstruktion, Architekturteile. Monographien des RGZM 86. Mainz: Verlag des RGZM.

C. Mazzola. 2009. What to do with "large quantity finds in archaeological collections" – a KUR project. News in Conservation 6 (December 2009), 6.

C. Métral et al. 2012. An ontology of 3D visualization techniques for enriched 3D city models. In Usage, Usability, and Utility of 3D City Models – European COST Action TU0801 (p. 02005). EDP Sciences.

A. M. Pérez et al. 2010. An enhanced visualization ontology for a better representation of the visualization process. In International Conference on ICT Innovations. Berlin, Heidelberg: Springer-Verlag GmbH, 342–347.

M. Pfarr-Harfst and S. Wefers. 2016. Digital 3D reconstructed models – structuring visualisation project workflows.
In M. Ioannides et al. (eds.), EuroMed 2016, Part I, LNCS 10058, 544–555. DOI: 10.1007/978-3-319-48496-9_43

M. Pfarr-Harfst. 2016. Typical workflows, documentation approaches and principles of 3D reconstructions. In M. Pfarr-Harfst et al. (eds.), How to Manage Data and Knowledge Related to Interpretative Digital 3D Reconstructions of Cultural Heritage? Berlin, Heidelberg: Springer-Verlag GmbH, 32–46.

R. Potter and H. Wright. 2006. An ontological approach to visualization resource management. In International Workshop on Design, Specification, and Verification of Interactive Systems. Berlin, Heidelberg: Springer-Verlag GmbH, 151–156.

A. Pülz. 2010. Das sog. Lukasgrab von Ephesos: Eine Fallstudie zur Adaption antiker Monumente in byzantinischer Zeit. Forschungen in Ephesos 4, 4. Wien: Verlag der Österreichischen Akademie der Wissenschaften.

G. Shu et al. 2008. Bringing semantics to visualization services. Advances in Engineering Software 39, 6, 514–520.

V-MUST. 2013. Virtual Museums by Interaction Technology. http://www.v-must.net/virtual-museums/categories/interaction-technology

M. Voigt and J. Polowinski. 2011. Towards a Unifying Visualisation Ontology. Technische Berichte, Technische Universität Dresden, Fakultät Informatik, 2011-01. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-67559

S. Wefers. 2015. Die Mühlenkaskade von Ephesos. Technikgeschichtliche Studien zur Versorgung einer spätantiken bis frühbyzantinischen Stadt. Monographien des RGZM 118. Mainz: Verlag des RGZM.

S. Wefers et al. 2016. Development of a platform recommending 3D and spectral digitisation strategies. Virtual Archaeology Review 7, 15, 18–27.

Received October 2016; revised September 2017; accepted November 2017.
MUSINT II: A Complex Project on a Virtual and Interactive Museum Involving Institutions in Florence, Rome and Heraklion

Anna Margherita Jasink, Cristian Faralli, and Panaiotis Kruklidis, University of Florence, Italy

MUSINT II is part of a more general project on a series of virtual and interactive museums, using traditional and new technologies with the aim of reaching a wider audience. An interdisciplinary methodology, provided by the participation of archaeologists, architects/designers, and computer scientists, makes the project rich in attractive solutions for visitors of different levels. MUSINT II uses a sophisticated structure: a series of three-dimensional models produced with both photogrammetry and laser scanning. A complex database with many interconnected queries was implemented to make the study of a large number of objects more efficient and to offer truly innovative research responses with effortless data processing. New hyper-realistic techniques are used to best illustrate the reconstruction of buildings, objects, and scenes of life. A specific educational section is addressed to young people, with all these new techniques applied in a winning way. The main subject of MUSINT II is a specific category of small objects, sealings and seals, coming from the excavations at Haghia Triada carried out by the Italian Archaeological Expedition in Crete at the beginning of the 1900s. Our purpose is to offer a new analytic and, at the same time, synthetic vision, addressed to a wide audience, of the historical and archaeological representation of one of the most important sites of Minoan Crete.

Key words: interactive museum, Aegean civilizations, technological methodologies.

SDH reference: Anna Margherita Jasink et al. 2017. MUSINT II: A complex project on a virtual and interactive museum involving institutions in Florence, Rome and Heraklion. SDH 1, 2, 1 pages. DOI: 10.14434/sdh.v1i2.23192

1.
Introduction

The development of new technologies and their application in the general field of cultural heritage has opened new perspectives also in museology [Economou 2006; Hermon and Niccolucci 2007; Oberländer-Târnoveanu 2008; EPOCH project at http://epoch-net.org/site/], to such an extent that the very idea of the museum has evolved from a traditional static form, conceived for the direct observation of artifacts, to a more dynamic and interactive concept that goes beyond the single-site exhibition. The individual traditional museum has been digitally duplicated to make it accessible online. Exploiting advanced digital communication tools, it is possible to go beyond the simple "replica" of the museum and show the intrinsic properties of the artifacts: hidden aspects not visible at first sight, the original geographical or historical location of their creation, and other information accessible through dedicated databases, in some cases with recorded interviews with curators. These important advances are still confined to the domain of a static vision of the museum, although with delocalized access; the main function of the interaction is to offer the possibility of browsing through collections and choosing which item to view. The best examples are obviously those of the British Museum and of the Louvre, with huge collections and various databases available online, educational content for children, and special focus on selected items.

Authors' address: A. M. Jasink, University of Florence, Via San Gallo 10, Florence, Italy; email: jasink@unifi.it; C. Faralli, email: cristianfaralli@gmail.com; P. Kruklidis, email: panaiotiskruklidis@gmail.com. Permission to make digital or hard copies of part or all of this work is granted without fee according to the open access policy of SDH. © 2017 SDH Open Access Journal

The virtual museum [Djindjian 2007; Mancini 2008; Antoniou et al. 2016] can arise as an autonomous digital entity, ensuring on the one hand general free access on the web and on the other enhancing the traditional museum experience through personalization, interactivity, and richness of content. This has been accomplished by full recourse to new technologies [Stanney and Hale 2014]. It is quite evident that the virtual museum is particularly suited for applications in archaeology [Moscati 2007; Artusi et al. 2010]. New imaging methods [Remondino 2011] allow for an augmented and more significant representation of the artifacts. In addition, the virtual museum provides the audience with extended and integrated information on the site of the finds, their history, and their cultural context. Finally, it allows the simultaneous exhibition of finds stored in different museums or locations, resulting in a more complete understanding of the materials. Exploring these and other applications for archaeology was the aim of the Virtual Museum Transnational Network V-MUST (2011–2015) (http://www.v-must.net/). This paper deals with MUSINT, the first application of the virtual museum to Aegean archaeology (Bronze Age Greece). So far, this sector of studies has been only partially illustrated through augmented reality techniques: see, e.g., the reconstruction of Hall 64 of the Pylos palace, Messenia (http://classics.uc.edu/prap/hall64.html). The most important Bronze Age findings from the largest museum collections of Greece are now available online, thanks to the Latsis Foundation Museums Cycle (http://www.latsis-foundation.org/eng/education-science-culture/culture/themuseums-cycle). However, this is simply the online transposition of actual printed catalogues, without any possibility of interaction. No Aegean site or project figures in the V-MUST network, though various examples from classical Greece are represented. A proper virtual museum is thus still missing, and MUSINT is the first attempt in this direction.
This is why, in recent years, we developed a research project (MUSINT) on virtual and interactive museums of the Aegean civilizations with the goal of reaching a wide audience, combining traditional methods and new technologies [Jasink, Tucci and Bombardieri 2011]. The virtual museum is accessible on the www.aegean-museum.it and www.dbas.sciant.unifi.it websites (www.aegean-museum.it/musint2/it for MUSINT II). The project includes the joint contribution of archaeologists and historians, to define the subject matter and to write accurate texts; of architects/designers, to ensure enhanced standards of pictorial and graphic representation; and of computer scientists, to guarantee a smooth and effective connection between content and display. The intrinsically multi-disciplinary approach makes the project attractive to different visitors thanks to the variety of available pathways. At a previous CHNT meeting, an application of MUSINT to the teaching of ancient history in primary schools was discussed [Dionisio and Jasink 2016]. In the present paper, MUSINT II, a major addition to the general project, is described. Technological methods and interactive devices have been exploited more extensively to present specific historical and archaeological topics at a high scientific level on the one hand and for a non-specialized audience on the other. MUSINT II is accessible on the websites given above. The core of the new project is the collection of small clay objects from the Neopalatial villa of Haghia Triada, one of the main Cretan sites, which flourished in the middle of the 2nd millennium BC, the period of full development of the Minoan civilization. While the topic of the new project is more limited, the potential audience is definitely wider: the new museum is structured in two sections, the first addressed to scientists and curious adults, the second with explicit educational aims.
The cretulae, or nodules, we are dealing with represent, together with the tablets, the administrative documents of the period and may bear both sealings and signs carved in the Linear A script. These small objects were discovered at the beginning of the 1900s during excavations by the Italian Archaeological Expedition in Crete, specifically devoted to the Phaistos and Haghia Triada sites. The majority of these finds remained in Crete, in the Heraklion Archaeological Museum, while others were taken to Italy and are presently stored in the National Archaeological Museum of Florence and in the Prehistoric-Ethnographic Pigorini Museum in Rome, the latter named after its founder Luigi Pigorini (see Fig. 1).

Figure 1. Home page of MUSINT II.

These documents have been repeatedly analyzed since their discovery. The novelty of the present project is to make the items from the three locations simultaneously available in the single virtual museum of MUSINT II while retaining their individuality. This is not only a useful, simplified tool for researchers, but also a unique opportunity for other visitors to get a full idea of the Minoan administrative system. All web pages are characterized by common graphics with rich and attractive details. The page layout is designed with a fully colored background, with communicative images and animations. The color palette was decided in advance, and an independent style for each page is obtained with appropriate colors. Photographs, drawings, and reconstructions recall the style of the general MUSINT project and move on two parallel but different tracks, as required by the approaches appropriate to the scientific and educational sections.

2. Haghia Triada

The Haghia Triada webpage, from which five different items can be explored, is shown in Figure 2.

Figure 2. Home page of the "Haghia Triada" website.

Haghia Triada history.
Various phases of the archaeological site, which developed throughout the Bronze Age (Prepalatial, Protopalatial, Final/Monopalatial, Postpalatial periods), can be explored, with particular attention to the Neopalatial period, to which the sealed documents refer.

Excavations. This page is concerned with the Italian archaeological missions in Crete, and the story of the protagonists of this adventure is also briefly described. Reading the information provided has been improved and made easier by using so-called popup windows written in JavaScript. This programming language is considered one of the most powerful tools for enhancing a website with dynamic and interactive applications. JavaScript code embeds cleanly in the HTML/PHP pages normally used to build websites. JavaScript popup windows are successfully used in the MUSINT II website to open pages containing biographies with pictures and other images, without compromising the understanding of the text [Faralli 2016].

Documents. The administrative documents are described, including tablets and sealed brief inscriptions. The latter documents, in particular the Florentine and Roman pieces, are the core of MUSINT II, as will be seen in the following. They are small clay pieces with an impressed seal on one side and, often, a symbol in the Linear A script on another face, as shown in Figure 3.

Database. This database is one of the most innovative aspects of MUSINT II. The relational database management system used is MySQL, a system well suited to managing dynamic websites. MySQL is fully supported by most programming languages, such as PHP. The interaction between the database and the web pages is achieved by using the SQL language to construct the queries and to retrieve the data. This architecture has been successfully used in a well-known application in the Aegean field [Arachne: http://arachne.uni-koeln.de/], which concerns the whole corpus of Minoan and Mycenaean seals.
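The popup mechanism described above can be sketched in a few lines of plain JavaScript. This is a minimal illustration, not the actual MUSINT II code: the function names, window dimensions, and URL are all hypothetical.

```javascript
// Build the feature string for window.open(); keeping this as a pure
// function makes the popup geometry easy to test and reuse.
function popupFeatures(width, height) {
  return `width=${width},height=${height},scrollbars=yes,resizable=yes`;
}

// Open a biography page in a popup without leaving the current text.
// "openBiography" and the window name are illustrative choices.
function openBiography(url) {
  // window.open exists only in a browser environment
  if (typeof window !== "undefined" && window.open) {
    window.open(url, "biography", popupFeatures(480, 600));
  }
}
```

A link in the HTML/PHP page could then call it with something like `<a href="#" onclick="openBiography('biography.html'); return false;">…</a>`, so the main text stays on screen while the biography opens in its own window.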
Along the same lines, we have already been able to enter into a far more elaborate analysis of a specific sector of the seals [DBAS CHS – Cretan Hieroglyphic Seals: http://www.sagas.unifi.it/vp-394-dbas-chs-cretan-hieroglyphic-seals.html], to sort out useful statistical information.

Gallery. Photos, maps, and drawings complete the Haghia Triada section.

Figure 3. Typologies of the administrative sealed documents from Haghia Triada (modified from Hallager 1996).

The present database considerably improves the study of the Haghia Triada sealed documents. First of all, it gathers in a unified digital catalogue the information on the seals and on the written signs, which had been published separately in the basic volumes CMS II 6 and GORILA 2, respectively. In addition, it allows extensive and overlapping cross-correlations that can lead to new interpretations of both seals and written signs. Dedicated files for single cretulae with a stamped seal and, very often, carved script signs constitute the starting point for entering into new and diversified routes by means of distinct but connected queries, which try to answer as many questions as possible. This relational database allows users to consult the information about shape, motifs, inscriptions, and so on of single objects and to make queries correlating such information; it also provides statistical data (percentages, recurrences, and so on). A more detailed discussion of the structure of this database will appear in a forthcoming paper by Alberti, Faralli, and Jasink. This specific project on materials that have already been widely analyzed has been conceived not as a mere instrument but as a scientific study capable of contributing to our knowledge of the administrative system, the figurative arts, and the daily life of the Minoan Neopalatial society, at least at Haghia Triada.

3.
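The kind of correlating, statistics-oriented query described above can be sketched as follows. This is a hypothetical example, not the MUSINT II implementation: all table and column names (`cretulae`, `seal_motif`, `linear_a_sign`, `shape`, `museum`) are invented for illustration, since the real schema is not published here. The `?` placeholders follow the MySQL prepared-statement convention.

```javascript
// Sketch of a query counting how often each seal motif co-occurs with
// a Linear A sign, optionally filtered by object shape and museum.
// Schema names are invented; the filters map to WHERE clauses.
function motifRecurrenceQuery(filters) {
  const where = [];
  const params = [];
  if (filters.shape) {           // e.g. "hanging nodule"
    where.push("shape = ?");
    params.push(filters.shape);
  }
  if (filters.museum) {          // e.g. "Florence", "Rome", "Heraklion"
    where.push("museum = ?");
    params.push(filters.museum);
  }
  const sql =
    "SELECT seal_motif, linear_a_sign, COUNT(*) AS recurrences " +
    "FROM cretulae" +
    (where.length ? " WHERE " + where.join(" AND ") : "") +
    " GROUP BY seal_motif, linear_a_sign ORDER BY recurrences DESC";
  return { sql, params };
}
```

In the MUSINT II stack the equivalent statement would be issued from PHP; separating the SQL text from the parameter values, as here, is what allows the same correlating query to serve both single-object lookups and the percentage/recurrence statistics the section mentions.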
The Italian museums

In the present section we consider the web pages (shown in Figure 4) of the museums in Florence and Rome, which have been worked out with equal criteria but show some differences in two of the three sub-sections. A significant difference is found in the "Collections." The Roman collection consists solely of materials from Crete, while the Florentine collection covers the whole Aegean world (Crete, Greece, the Cyclades, Rhodes and, in addition, Cyprus), including Neolithic as well as Bronze Age objects. The Pigorini Museum of Rome received many more objects from the Italian mission in Crete than the Florentine museum did. But in Florence, the director of the Archaeological Museum, Luigi Adriano Milani, between the end of the 19th century and 1910, was in contact with various merchants and archaeological institutions and bought or traded objects that enriched the Florentine "Aegean Collection." In Florence, therefore, materials from Haghia Triada are not as abundant as in Rome [Jasink and Bombardieri 2009; Jasink, Tucci and Bombardieri 2011; MUSINT website].

Figure 4. Web pages of the two Italian museums with Minoan collections.

This explains the difference in the third item introducing the web pages of the two museums: "More documents from Haghia Triada" for Rome and "Seals from Crete" for Florence. The first includes documents (three tablets and one pithos) which differ from cretulae but still come from Haghia Triada and may be considered administrative and written sources. The second is made up of a series of seals that came to the Florentine museum through different routes and range from the Minoan Prepalatial to the last Mycenaean period (Figure 5). These have been included in MUSINT II to complete the Italian collection of Aegean seals; the seals are presented in MUSINT II as 3D models, in the same way as the sealings.
A more detailed discussion of the 3D structure and the photogrammetry methodologies can be found in another paper in this same volume [Marziali and Dionisio 2017] and in other articles [Pitzalis et al. 2007; Sotoodeh et al. 2008].

Figure 5. Examples of seals of the Aegean collection in Florence.

As discussed above, the core of MUSINT II is represented by the files made available in the sub-section "Sealed documents from Haghia Triada," which has the same structure for the two museums. A dedicated file, containing a descriptive table with detailed information and, whenever possible, a 3D image, has been uploaded for each object (Figure 6). For the descriptive tables, reference can also be made to the general Haghia Triada database.

Figure 6. Example of a file concerning a "hanging nodule" from Haghia Triada.

The 3D images, created either with photogrammetry methodologies (Florentine seals) or with laser scanning acquisition (Florentine and Roman sealings) [Remondino 2011] and uploaded to the Sketchfab platform, have been transferred into MUSINT II. For this purpose, a Pro account was created on the Sketchfab platform in order to make available a profile page with the space needed to host and visualize the 3D models. For each 3D model, three files have been uploaded (a JPG texture file, an OBJ model file, and an MTL material file), combined into a single compressed folder. This reduces the overall size, thus facilitating the upload to the website. The web pages related to the 3D models have been customized in order to obtain a better interaction between the Sketchfab service and the MUSINT II website. HTML/PHP code has been embedded to recover information from the remote archives. At the same time, a series of tailored database tables has been constructed to manage the data coming from Sketchfab. Finally, SQL code has been written to allow these data to be read and shown on the website [Dionisio et al. 2016].
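The customization step described above can be illustrated with Sketchfab's standard iframe embedding. This is a sketch under assumptions, not the MUSINT II code: the helper name is invented, the model UID is a placeholder, and the `https://sketchfab.com/models/<uid>/embed` URL pattern is the publicly documented embed form at the time of writing.

```javascript
// Build the markup that drops a Sketchfab viewer into an object's page.
// The UID would come from the tailored database tables mentioned above;
// "abc123..." below is a placeholder, not a real model identifier.
function sketchfabEmbed(uid, width, height) {
  const src = `https://sketchfab.com/models/${uid}/embed`;
  return (
    `<iframe src="${src}" width="${width}" height="${height}" ` +
    `frameborder="0" allowfullscreen></iframe>`
  );
}
```

Server-side PHP would print an equivalent string into each object's file, with the UID retrieved by the SQL layer, so that adding a new 3D model only requires inserting a database row rather than editing a page.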
Tools for the processing and editing of 3D meshes, such as MeshLab, can also be found among the EPOCH project results, together with a number of documents and publications about technological strategies to improve the cultural heritage sector [McLoughlin 2007].

4. Educational section

In the educational section (Figure 7), initially in Italian, images and contents play a basic role, with the goal of raising an immediate interest in the younger audience. Charming hand-made drawings have been created with traditional techniques and with the support of computer graphics. Six different webpages may be opened in this section. Videos, games, and storytelling are proposed to the audience, in a mix that has been variously explored by other projects [Danks et al. 2007; Ripanti 2015; Antoniou et al. 2016; McLoughlin et al. 2007; the CHESS project at http://www.chessexperience.eu/; the EMOTIVE project at http://emotiveproject.eu/].

Figure 7. Home page of the educational section.

Videos. Five videos have been inserted here. The files are uploaded directly to the server where the website runs, in order to avoid banners or advertising from third-party services. Despite the small amount of disk space, the videos are well balanced, with good resolution and lag-free streaming over the internet. The