JOURNAL OF THEORETICAL AND APPLIED MECHANICS
48, 3, pp. 733-750, Warsaw 2010

INTERACTIVE MULTIBODY SIMULATION IN AUGMENTED REALITY¹

Pier Paolo Valentini, Eugenio Pezzuti
University of Rome "Tor Vergata", Department of Mechanical Engineering, Rome, Italy
e-mail: valentini@ing.uniroma2.it

¹ The paper was presented at the ECCOMAS Thematic Conference on Multibody Dynamics which was held at Warsaw University of Technology on June 29–July 2, 2009.

In this paper, the authors discuss a methodology for enhancing multibody system simulations by means of an Augmented Reality (AR) implementation. AR deals with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. The purpose is to illustrate how recent developments in computer-aided design and augmented reality can improve realism and interactivity when simulating the movement of digital mock-ups. The paper discusses hardware and software implementations and gives an overview of several illustrative examples. The basic idea is described starting from a simple simulation of a falling body subjected to gravity, with the initial conditions set interactively by the user. Then, a more complex interactive simulation of the kinematics of a robot whose end-effector can be grabbed and moved by the user is presented. Finally, the real-time dynamic simulation of a slider-crank mechanism is discussed. The integration between AR and multibody simulation has proved to be very useful for didactic purposes and collaborative design.

Key words: interactive simulation, augmented reality, multibody

1. Introduction

During the last two decades, worldwide research on multibody kinematics and dynamics has produced many interesting results. Exploring new formulations and implementing better ways to build and solve the equations of motion have been among the investigated topics. Moreover, great improvements have been made in the way very complex systems are simulated and controlled, including multidisciplinary aspects. At present, it is possible to simulate very complex and large systems with a high level of reliability, requiring affordable hardware resources. As a consequence of these important improvements in simulation techniques, the designer has to define many input parameters and interpret many output results. Complex models include many bodies, numerous constraints, possibly 3D contacts, and are able to reproduce very complex three-dimensional motion. Considerable support for the management of pre- and post-processing of data has been offered by the development of Computer Aided Engineering (CAE) environments (Bernard, 2005). The development of hardware and software capabilities has supported the integration between modeling environments and multibody solvers. Moreover, many modeling environments can interpret the mating relations in assemblies, translating them into constraint equations for multibody simulation. After (or, in some cases, during) the simulation, the computer graphics capabilities can be used to visualize a realistic animation and get an idea of the behavior of the system at a glance. This is useful for debugging the implementation of the system and correcting evident errors without screening a lot of numerical data. The simplification in building and reviewing models has boosted their usage in industry, where saving time and money is very important.

The support of computer graphics for multibody simulation has also been useful for didactics.
Teachers can prepare realistic 3D multibody models to explain the functioning of complex mechanisms, making the explanation more effective for the students. Presently, CAE applications support designers and teachers through the communication of results coming from numerical computations. By using these instruments, engineers and designers can develop their creative ideas in front of a computer monitor using the mouse and keyboard. Although the integration between numerical computation and graphics leads to the generation of photo-realistic digital mock-ups, these are still far from the real context and the user has a limited interaction with them. This limitation can generate problems (non-conformities, unexpected behavior and appearance, etc.) when the designed products have to be integrated in the real world. To overcome this disadvantage, the development of new instruments, based on mixing the real world with virtual objects, seems to be the future of CAE. One of these instruments is Augmented Reality.

2. Augmented reality

Augmented Reality (AR) is an emerging field of visual communication and computer technologies (Azuma, 1997; Azuma et al., 2001; Bimber and Raskar, 2005). It deals with the combination of real-world images and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and "augmented" by the addition of computer-generated graphics. With an AR system, the user can extend the visual perception of the world, being supported by additional information and virtual objects. These objects do not remain inside a computer monitor but are integrated in the real world. The AR system has some similarities with Virtual Reality (VR). The main difference is that in VR the perceived world is fully virtual (generated by one or more rendering pipelines), while in AR the virtual world merges with the real one. The level of detail of the augmented scene has to be very realistic in order to give the user the illusion of a single real world. The ways to capture images from the real world, process them and present them back to the user can differ (Vallino, 1998). The most widely implemented and promising technology for engineering purposes is the video see-through system. As illustrated in Fig. 1, it is based on the use of one or two cameras which acquire an image stream from the real world. The stream is processed by a computer which adds the virtual content, producing an augmented image stream which is presented to the user by means of a blind (non-transparent) visor.

Fig. 1. AR video see-through system

The scientific literature reports an increasing interest in the development of applications of augmented reality in many different fields. AR has been used in medicine, surgery, the military field, implant and component maintenance, robotics, and architecture. Most of these applications deal with merging into the real world objects, scenes and animations which have been modeled and simulated outside the system. It means that the user perceives a real scene augmented with pre-computed objects. The interaction with them is often limited to exploration. A good interaction with the scene may improve the impact of such technology (Klinker et al., 1999). Recent contributions in the scientific literature report an increasing interest in the development of engineering tools in the augmented reality environment (Stilman et al., 2005; Valentini, 2009; Valentini et al., 2008, 2009).
Fig. 2. The implemented AR system

For the specific purpose of this investigation, the implemented AR system (see Fig. 2) comprises an input video device, a Microsoft LifeCam VX6000 USB 2.0 camera, able to capture frames at up to 30 Hz with a resolution of 1024×768 pixels. The camera has been rigidly mounted on a Head-Mounted Display equipped with OLED displays (Z800 3D visor by Emagin – http://www.3dvisor.com/), which supports a resolution of 800×600 pixels for each eye. The processing unit is a personal computer with an Intel Core 2 Quad processor, 3 GB RAM and an NVidia Quadro FX3700 graphics card. The operating system was Windows XP and the development suite used for programming was Microsoft Visual Studio 2003.

The routines for image processing have been developed using the well-known open-source library named ARToolkit, which can be freely downloaded from http://sourceforge.net/project/showfiles.php?group_id=116280. It comprises a set of numerical procedures which are able to detect and recognize a planar patterned marker in a video stream in real time. Using correlation techniques, the routines are also able to compute the relative position and attitude between the markers and the camera with good precision. This computation is necessary for an accurate perspective collimation between the virtual entities and the real scene. The details about the specific implementation and about the contents of the library go beyond the scope of this paper; the interested reader can find useful material at the internet site http://www.hitl.washington.edu/artoolkit/.

3. Implementing multibody simulation into augmented reality environment

Starting from the description of the capabilities of Augmented Reality, the next question is "how can AR support multibody simulations?". There are two possible answers. The first concerns the possibility to project onto the real world the results coming from a pre-computed simulation. It concerns the rendering on the scene of all the objects involved in the simulation, whose positions are updated according to the results of the simulation. This implementation is similar to that of common post-processing software for visualizing graphical results. The only difference is the introduction of the simulated system into the real world. The advantage is the ability to perceive the interaction with the real world and to check working spaces, possible interferences, etc. Although it can be useful, this approach does not use the full potential of AR. A smarter way to enhance multibody simulation is to introduce interactivity. It means that the user does not only watch the augmented scene, but interacts with it. Interaction can concern both the definition of boundary conditions and initial parameters and the real-time control of the simulation. In all these cases, the solution of the equations of motion has to be computed in real time in order to populate the scene with quickly updated information. In order to enable communication between the user and the scene, the scene has to contain some specific sensors. Since the implementations of AR are based on image acquisition and processing, the simplest sensor is a flat patterned marker. Starting from these considerations, a generic multibody simulation in augmented reality can be implemented following four main steps:

1. Before the simulation starts, the geometries and topological properties (joints and connections) have to be defined (as for any multibody system);
2. The real scene has to contain the information needed to collimate the real world with the virtual objects, and the virtual sensor(s) for the interactive action of the user (i.e. it has to contain an adequate number of patterned markers);

3. During each frame acquisition, the multibody equations have to be solved in order to compute the correct position of all the virtual bodies in the scene, taking into account the information coming from the sensors;

4. For each frame acquisition, the virtual objects have to be rendered on the scene in the correct position and attitude.

The rendering of geometries can be implemented using OpenGL capabilities. In order to use quite complex shapes, all the geometries can be modeled in .vrml (or modeled in a CAD system and then exported to .vrml). In this case, another library can be useful, OpenVrml, which is able to manage the OpenGL rendering of .vrml contents. The details about OpenVrml go beyond the scope of this paper; the interested reader can find useful material at the internet site of the project (http://openvrml.org/). A scheme of the workflow for the entire simulation is depicted in Fig. 3.

Fig. 3. Tasks for implementing multibody simulation in augmented reality

Between the input procedures (data acquisition and marker recognition) and the output procedures (rendering of the virtual objects on the input scene), the multibody computation has to be interposed. Since it has to be performed between the acquisition of one frame and the next one, all the equations need a real-time solution. For this purpose, it is useful to optimize the solution strategy; kinematic simulations are more suitable because they involve the solution of a system of non-linear algebraic equations instead of a system of differential-algebraic equations (García de Jalón and Bayo, 1994).

In order to explain in detail all the steps required to implement a multibody simulation in an Augmented Reality environment, three examples are described next.

4. First example: a bouncing ball

The first example deals with the simulation of a rigid sphere that falls subjected to gravity and bounces on a plane. The initial conditions of motion of the body are to be chosen by the user interactively. The simulation needs two patterned markers. The first defines the position and attitude of the world reference frame. It is used for the computation of the sphere reference frame and for the location of the rigid plane on which the ball bounces. The second marker can be viewed as a communication sensor. The user's intent can be interpreted by tracking the position and velocity of a point P on this marker. The position of point P can be computed starting from the recognition of the marker in the scene, computing its homogeneous transformation matrix with respect to the camera, T_{2-cam}.

The generic transformation matrix T is a 4×4 matrix which expresses the relative position and attitude between two reference systems. The first 3×3 portion of this matrix is used to define the relative orientation between the two reference frames. The last column is used to describe the relative position between the origins of the coordinate frames.
The last row of the matrix is [0 0 0 1]:

\[
\mathbf{T} = \begin{bmatrix} [\mathrm{Orientation}]_{3\times 3} & [\mathrm{Position}]_{3\times 1} \\ 0\;\;0\;\;0 & 1 \end{bmatrix} \qquad (4.1)
\]

The relative position between marker 2 and marker 1 can be computed by multiplication of the transformation matrices, and the position of point P with respect to the marker 1 (world) reference frame can be computed as

\[
\mathbf{P}_1 = \mathbf{T}_{1\text{-}cam}^{-1}\,\mathbf{T}_{2\text{-}cam}\,\mathbf{P}_2 \qquad (4.2)
\]

where P_i = {x, y, z}_i^⊤ is the position vector of point P expressed using marker i, and T_{i-cam} is the transformation matrix between marker i and the camera, expressed as in Eq. (4.1).

Concerning the velocity of point P, it can be estimated from the computation of the position in two consecutive frames as

\[
\dot{\mathbf{P}}_1 = \mathbf{T}_{1\text{-}cam}^{-1}\,\mathbf{T}_{2\text{-}cam}\,\dot{\mathbf{P}}_2
= \mathbf{T}_{1\text{-}cam}^{-1}\,\mathbf{T}_{2\text{-}cam}\,
\frac{\mathbf{P}_2^{fr\,i}-\mathbf{P}_2^{fr\,(i-1)}}{1/\mathit{framerate}} \qquad (4.3)
\]

where \dot{P}_j = {v_x, v_y, v_z}_j^⊤ is the velocity vector of point P expressed in the marker j reference frame, P_j^{fr i} is the position vector of point P expressed using marker j and computed at frame i, and framerate is the frame rate of the video stream [frames/s].

After the computation of the initial conditions, the simulation can start. The user can choose this event using the keyboard or mouse as a trigger. After the beginning of the simulation, the position of the sphere in space can no longer be chosen by the user, but is computed by integrating the equations of dynamics. Since the computation and the rendering have to be processed in real time, an explicit integration procedure is to be preferred. Assuming that gravity acts along the negative y direction of the world frame, the equations for the free fall of the sphere are

\[
\begin{Bmatrix} v_x \\ v_y \\ v_z \end{Bmatrix}^{fr\,i} =
\begin{Bmatrix} v_x \\ v_y - g\,\Delta t \\ v_z \end{Bmatrix}^{fr\,(i-1)}
\qquad
\begin{Bmatrix} p_x \\ p_y \\ p_z \end{Bmatrix}^{fr\,i} =
\begin{Bmatrix} p_x + v_x\,\Delta t \\ p_y + v_y\,\Delta t - g\,\Delta t^2 \\ p_z + v_z\,\Delta t \end{Bmatrix}^{fr\,(i-1)} \qquad (4.4)
\]

where p^{fr i} is the position vector of the center of the sphere at frame i and v^{fr i} is the velocity vector of the center of the sphere at frame i.

The detection of the contact with a generic plane can be computed using a proximity condition on the distance between the ball and the plane

\[
\text{Contact} \;\rightarrow\; |d|_{sph\text{-}pl} - R_{sph} \leq \mathrm{Tolerance} \qquad (4.5)
\]

where R_{sph} is the radius of the sphere and |d|_{sph-pl} is the modulus of the distance between the sphere and the plane.

When Eq. (4.5) is fulfilled, the integration of the equations of motion has to be stopped, imposing the impact condition for computing the new initial conditions

\[
\mathbf{v}^{fr\,i}\cdot\mathbf{n}_{pl} = -e\,\mathbf{v}^{fr\,(i-1)}\cdot\mathbf{n}_{pl} \qquad (4.6)
\]

where n_{pl} is the unit vector normal to the plane and e is the restitution coefficient of the impact (a scalar).

Figure 4 summarizes four snapshots taken during the simulation. The trajectory of the center of the ball has been tracked using small spheres in order to visualize the sequence of bounces and the effect of the restitution coefficient.

Fig. 4. Four snapshots from the bouncing ball simulation

5. Second example: inverse kinematics of a spatial manipulator

The second example deals with the simulation of a spatial manipulator. With reference to Fig. 5, let us consider a mechanism with 4 rigid links connected by 3 revolute joints and 1 spherical joint. According to Grübler's count, the manipulator has 6 degrees of freedom

\[
dof = 6\,n_{link} - \sum_{i=1}^{n_{joints}} (6 - f_i) = 6
\]

This means that in order to define the spatial position of the manipulator in a unique way, we have to prescribe 6 independent parameters (i.e. the position and attitude of the end-effector). For an interactive inverse kinematics simulation, this means that the user can freely choose the position and attitude of the end-effector, and the Augmented Reality scene has to be able to include such a 6 d.o.f. sensor.
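In the implemented system, this 6 d.o.f. sensor is simply the patterned marker attached to the end-effector, whose pose relative to the world marker is measured at every frame (Eq. (5.4) below formalizes this step). The following minimal sketch shows how such a reading could be obtained with the classic ARToolkit C API; the marker handles, the marker width and the centre coordinates are illustrative assumptions, not values taken from the paper.

#include <AR/ar.h>

/* Sketch: read the pose of the sensor marker (marker 1) with respect to the
   world marker (marker 0). All names are illustrative. */
void read_sensor_pose(ARMarkerInfo *world_info, ARMarkerInfo *sensor_info,
                      double marker_width, double center[2],
                      double quat[4], double pos[3])
{
    double world_trans[3][4];      /* pose of marker 0 w.r.t. the camera  */
    double sensor_trans[3][4];     /* pose of marker 1 w.r.t. the camera  */
    double world_inv[3][4];        /* camera pose w.r.t. marker 0         */
    double sensor_in_world[3][4];  /* pose of marker 1 w.r.t. marker 0    */

    arGetTransMat(world_info,  center, marker_width, world_trans);
    arGetTransMat(sensor_info, center, marker_width, sensor_trans);

    arUtilMatInv(world_trans, world_inv);
    arUtilMatMul(world_inv, sensor_trans, sensor_in_world);

    /* 3 position coordinates plus the attitude quaternion: the 6 independent
       parameters driving the inverse kinematic analysis */
    arUtilMat2QuatPos(sensor_in_world, quat, pos);
}

The extracted position and attitude are then used as the known terms of the closure equation discussed below.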
The first step in building the model is the construction of the geometries of each link, which can be stored in .vrml files. The second step is the preparation of the scene. We need a marker (marker 0) to define the position and orientation of the manipulator world coordinate system (i.e. its position inside the scene) and another marker (marker 1) to define the position and the orientation of the end-effector, which will work as an active sensor (Fig. 5, on the right).

Fig. 5. The manipulator of the second example (nomenclature and topology on the left, reference frames and patterned markers on the right)

The third step is the implementation of the system of constraint equations. It is useful, for the subsequent graphical operations, to use a 4×4 homogeneous transformation matrix T to express the relative position and attitude between two generic reference frames. The first 3×3 portion of this matrix is used to define the relative orientation between the two reference frames. The last column is used to describe the relative position between the origins of the coordinate frames. The last row of the matrix is [0 0 0 1]

\[
\mathbf{T} = \begin{bmatrix} [\mathrm{Orientation}]_{3\times 3} & [\mathrm{Position}]_{3\times 1} \\ 0\;\;0\;\;0 & 1 \end{bmatrix}
\]

Looking at the tip point P on link 2 (the center of the spherical joint), we can express its position with respect to marker 0 as

\[
\mathbf{P}_{marker\,0} = \mathbf{T}_{0-l0}\,\mathbf{T}_{l0-l1}\,\mathbf{T}_{l1-l2}\,\mathbf{P}_{link\,2} \qquad (5.1)
\]

where P_{marker 0} is the position vector of point P in the marker 0 (world) coordinate system; P_{link 2} is the position vector of point P in the link 2 (local) coordinate system; T_{0-l0} is the homogeneous transformation matrix between the base (considered as link 0) and marker 0, a function of the parameter α which describes the relative rotation at their revolute joint; T_{l0-l1} is the homogeneous transformation matrix between link 1 and the base (considered as link 0), a function of the parameter β which describes the relative rotation at their revolute joint; T_{l1-l2} is the homogeneous transformation matrix between link 2 and link 1, a function of the parameter γ which describes the relative rotation at their revolute joint.

Looking at the same point P, but on the slider, since the slider is attached to marker 1, we can express its position with respect to marker 0 as

\[
\mathbf{P}_{marker\,0} = \mathbf{T}_{0-1}\,\mathbf{P}_{slider} \qquad (5.2)
\]

where P_{slider} is the position vector of point P in the slider (or marker 1, local) coordinate system, and T_{0-1} is the homogeneous transformation matrix between marker 1 and marker 0. It is a function of 6 independent parameters which describe the relative position and the relative rotation between the two markers. These parameters can be considered as the input of the inverse kinematic analysis because they can be freely chosen by the user to move the manipulator.

Since, at point P, link 2 is connected to the slider by means of a spherical joint, we can obtain the closure loop equation of the mechanism as

\[
\mathbf{T}_{0-l0}\,\mathbf{T}_{l0-l1}\,\mathbf{T}_{l1-l2}\,\mathbf{P}_{link\,2} - \mathbf{T}_{0-1}\,\mathbf{P}_{slider} = \mathbf{0} \qquad (5.3)
\]

The system of equations (5.3) can be solved for the unknown kinematic parameters α, β, γ starting from the knowledge of the position and attitude of marker 1 (and of the end-effector slider) with respect to marker 0. In order to compute this information, we have to know the relative transformation between marker 1 and marker 0.

ARToolkit deals with matrices which are similar to the homogeneous ones.
They are 3×4 transformation matrices, containing the same information about the relative position and attitude as the homogeneous ones but without the last dummy row. Quaternions and the position vector can be extracted from these matrices by using the ARToolkit procedure arUtilMat2QuatPos. The poses of marker 0 (T^c_0) and marker 1 (T^c_1) with respect to the camera can be computed using the ARToolkit procedure arGetTransMat. Their inverse matrices (T^0_c and T^1_c, respectively) represent the relative transformations between the camera and the markers. The relative transformation between marker 1 and marker 0 can therefore be computed as

\[
\mathbf{T}^{0}_{1} = \mathbf{T}^{0}_{c}\,\mathbf{T}^{c}_{1} \qquad (5.4)
\]

This relation is used to compute the vector d and the vector a_2 in the world reference frame. By doing this, Eq. (5.3) can be solved for the unknown angles. At this point, the virtual position and attitude of each link of the mechanism are known. The next step is to compute the projection of the geometries in the augmented scene. Since the rendering engine is OpenGL, we have to deal with the ModelView projection matrix, which maps the 3D point coordinates into 2D (scene) coordinates. The first step in the rendering concerns loading the projection matrix computed from the perspective of marker 0. This task can be performed by using arglCameraViewRH, whose output is a vector of 16 elements containing the transformation and scale information needed to project an object from the 3D coordinate system of marker 0 onto the camera view plane. This transformation can be used to relate the position and attitude of each object to the virtual world coordinate system (marker 0). The next steps concern the computation of the transformations needed to draw the manipulator links in the scene, using the OpenGL operators glRotated and glTranslated. The first procedure performs a rotation by a given angle about a specified axis, the second performs a translation of a specified amplitude along a specified direction.

For the base, it is sufficient to perform a rotation about the z axis of marker 0 by the angle α:

glRotated(alpha,0.0,0.0,1.0)

For the first link, three transformations are needed: a translation of a1, a rotation about the z axis of marker 0 by the angle α and a rotation by the angle β about the axis of the first revolute joint:

glTranslated(0.0,0.0,a1)
glRotated(alpha,0.0,0.0,1.0)
glRotated(beta,0.0,-1.0,0.0)

For the second link, three transformations are needed: a rotation by the angle α about the z axis of marker 0, a translation to the center of the revolute joint between the first and the second links, and a rotation by the angle γ about the axis of the revolute joint between the first and the second links:

glRotated(alpha,0.0,0.0,1.0)
glTranslated(l1*cos(beta),0.0,l1*sin(beta)+a1)
glRotated(gamma,0.0,-1.0,0.0)

The end-effector, since it is fixed to marker 1, can be projected using the projection matrix of marker 1 without applying any further transformation. Screenshots of the animation are reported in Fig. 6.

6. Third example: dynamics of a slider-crank mechanism

The third example deals with the simulation of the dynamic behavior of the slider-crank mechanism shown in Fig. 7. It comprises four links: the base, the crank, the rod and the slider. The slider is connected to the base by a spring-damper element. The example shows how augmented reality can be used to interactively define the initial conditions of the simulation and to review a real-time processed animation.
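The spring-damper element mentioned above is the only applied force element besides gravity. As a minimal illustration (a sketch under assumed parameters, not code from the paper), its scalar force can be evaluated at each frame from the current and undeformed spring lengths; the stiffness k, the damping coefficient c and the undeformed length l0 are illustrative symbols:

/* Sketch: linear spring-damper force acting between the slider and the base.
   k, c and l0 are assumed (illustrative) parameters; l and ldot are the
   current spring length and its time derivative, estimated frame by frame. */
double spring_damper_force(double l, double ldot,
                           double k, double c, double l0)
{
    /* a positive value pushes the two attachment points apart */
    return -k * (l - l0) - c * ldot;
}

This force, projected along the spring axis onto the generalized coordinates, enters the generalized force vector F of the equations of motion introduced below; the undeformed length l0 is the value frozen interactively by the user before the simulation is started.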
As discussed in the previous example, the first step in building the model is the construction of the geometry of each link, which is subsequently stored in a .vrml file. The second step regards the preparation of the scene. We need two patterned markers: marker 0 to define the position and orientation of the world coordinate system, and marker 1 to define the position and the orientation of the slider, which will work as an active sensor (Fig. 7, on the right).

Fig. 6. Snapshots from the inverse kinematics simulation of the spatial manipulator

Fig. 7. The mechanism of the third example (nomenclature and topology on the left, reference frames and patterned markers on the right)

In order to solve the equations of motion interactively, the DAE system has to be built before the simulation starts and has to be embedded in the run-time code. For this example, it is useful to implement these equations in matrix form (Haug, 1989)

\[
\begin{bmatrix} \mathbf{M} & \boldsymbol{\Psi}_q^\top \\ \boldsymbol{\Psi}_q & \mathbf{0} \end{bmatrix}
\begin{Bmatrix} \ddot{\mathbf{q}} \\ \boldsymbol{\lambda} \end{Bmatrix} =
\begin{Bmatrix} \mathbf{F} \\ \boldsymbol{\gamma} \end{Bmatrix} \qquad (6.1)
\]

where q̈ is the vector of generalized accelerations (q is the vector of generalized coordinates and q̇ the vector of generalized velocities); λ is the vector of Lagrange multipliers; F is the vector of generalized forces (which includes the contributions of the spring-damper element and of gravity); Ψ_q is the Jacobian of the constraint equations vector Ψ; γ = −(Ψ_q q̇)_q q̇ − 2Ψ_{qt} q̇ − Ψ_{tt}, in which the generic subscript i denotes ∂/∂i; M is the inertia matrix.

For the proposed example, we have 23 constraint equations (5 for each revolute joint and 3 for the planar joint) and the mechanism possesses one degree of freedom.

In order to be evaluated in real time, the system in Eq. (6.1) has to be integrated using an explicit integrator. The integration time step has to be identical to the video frame period.

With reference to Fig. 8, the simulation strategy is the following. As the first step, the user can manipulate marker 1, moving the slider (snapshot a of Fig. 8). The equations for the real-time computation of the position and attitude of each link can be obtained as in the previous example. Then, the user can interactively define the undeformed length of the spring by placing the slider and pressing a key on the keyboard or a button on the mouse. As a result, the spring is rendered on the scene (snapshot b of Fig. 8). At this point, the user can move the slider; the spring remains attached to it and is rendered as deformed (snapshot c of Fig. 8). By pressing another key or button, the user can start the simulation. When the simulation begins, the position of the slider can no longer be chosen by moving marker 1, and the position and attitude of each link are computed according to Eq. (6.1). A real-time animation is rendered on the scene (snapshot d of Fig. 8). According to the simulation outputs, the position and attitude of the links can be drawn by using the OpenGL operators glRotated and glTranslated, as shown in the previous example.

Fig. 8. Snapshots from the dynamic simulation of the slider-crank mechanism

7. Conclusions

The implementation of multibody simulation in augmented reality reveals many attractive features. First of all, it is possible to perform the simulation of a virtual object while projecting the results into the real environment. In this way, it is possible to check the integration between the virtual objects and the real ones.
It is also possible to move the digital mechanism by dragging intuitive sensors, instead of using the mouse and keyboard as in a standard CAE environment. These simulations can also be useful for didactic purposes, to support the explanation of the motion of complex mechanisms, the effects of specific joints, the comprehension of working spaces, etc. Regarding the implementation details, the current level of hardware performance allows finding real-time solutions to both the kinematics and the dynamics equations between the acquisition of two subsequent camera frames. All the tasks concerning the recognition of markers and the computation of the view transformation matrices can be performed using the ARToolkit libraries. On the other hand, all the tasks concerning the drawing of textured shapes can be performed using the OpenVrml and OpenGL libraries. All these routines are open source and can be easily customized.

Future work will focus on two aspects. The first concerns the presentation of numerical results (such as reaction forces at joints, velocities, energy, etc.) directly on the augmented scene. The second focuses on the integration of specific and multipurpose real-time integrators. Collaborative design (two or more people managing the same scene) and the implementation of stereo vision are other interesting aspects to be investigated.

References

1. Azuma R.T., 1997, A survey of augmented reality, Teleoperators and Virtual Environments, 6, 4, 355-385
2. Azuma R., Baillot Y. et al., 2001, Recent advances in augmented reality, IEEE Computer Graphics, 21, 6, 34-47
3. Bernard A., 2005, Virtual engineering: methods and tools, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 219, 5, 413-422
4. Bimber O., Raskar R., 2005, Spatial Augmented Reality: Merging Real and Virtual Worlds, A.K. Peters Ltd.
5. García de Jalón J., Bayo E., 1994, Kinematic and Dynamic Simulation of Multibody Systems – the Real-Time Challenge, Springer-Verlag, New York
6. Haug E.J., 1989, Computer-Aided Kinematics and Dynamics of Mechanical Systems, Allyn and Bacon, Boston, MA, USA
7. Klinker G., Stricker D., Reiners D., 1999, Optically based direct manipulation for augmented reality, Computers and Graphics, 23, 827-830
8. Stilman M., Michel P., Chestnutt J., Nishiwaki K., Kagami S., Kuffner J.J., 2005, Augmented Reality for Robot Development and Experimentation, Tech. Report CMU-RI-TR-05-55, Robotics Institute, Carnegie Mellon University
9. Valentini P.P., 2009, Interactive virtual assembling in augmented reality, International Journal on Interactive Design and Manufacturing, 3, 109-119
10. Valentini P.P., Gattamelata D., Pezzuti E., 2008, A CAD system in Augmented Reality application, Proc. of 20th European Modeling and Simulation Symposium, Track on Virtual Reality and Visualization
11. Valentini P.P., Pezzuti E., Gattamelata D., 2009, Virtual engineering in augmented reality, Computer Animation, series Computer Science, Technology and Applications, Nova Publishing [in press]
12. Vallino J., 1998, Interactive augmented reality, PhD Thesis, Department of Computer Science, University of Rochester, USA

Interactive multibody simulation in Augmented Reality (AR)

Summary

In this paper, the authors discuss a methodology for improving multibody simulation through the use of
Augmented Reality (AR). This methodology involves the digital processing of images acquired by a camera, extended by adding computer-generated graphics to them. The purpose of the paper is to show to what extent recent achievements in computer-aided image processing can improve the impression of realism of digitally recorded images and enhance the interactivity with the user during the simulation of the motion of digital mock-ups of real objects. Several examples are presented, together with a discussion of the software used and of the parameters of the computer hardware. The basic idea is presented using the example of a body falling under gravity onto another object, with the user interactively choosing the initial conditions of the fall. Then a more complex model is described, i.e. the kinematics of a robot in which the motion of the end-effector can be controlled by the user. Finally, a real-time simulation of the dynamics of a slider-crank mechanism is shown. The thesis of the paper is that the integration of multibody simulation with the capabilities of Augmented Reality (AR) has great didactic value and supports collaborative design tasks.

Manuscript received November 25, 2009; accepted for print February 22, 2010