A Progress Report on Numerical Solutions of Least Squares Adjustment in GNU Project Gama

A. Čepek, J. Pytel

GNU project Gama for adjustment of geodetic networks is presented. Numerical solution of Least Squares Adjustment in the project is based on Singular Value Decomposition (SVD) and the General Orthogonalization Algorithm (GSO). Both algorithms enable the solution of singular systems resulting from the adjustment of free geodetic networks.

Keywords: geodesy, least squares adjustment, local geodetic network.

1 Introduction

Project Gama for adjustment of geodetic networks was started at the Department of Mapping and Cartography, Faculty of Civil Engineering, Czech Technical University in Prague, in 1998. At first it was planned to be only a local project with the main goal of demonstrating to students the power of object programming, and at the same time being a free independent tool for comparing adjustment results from various sources. The Gama project received the official status of GNU software in 2001, and now consists of a C++ library (including the small C++ matrix/vector template library gmatvec) and two programs, gama-local and gama-g3, which correspond to the two development branches of the project.

The stable branch of the Gama project consists of the command line program gama-local for adjustment of three-dimensional geodetic networks in a local coordinate system (a platform independent Qt based GUI roci-local is also available). The new development branch of the project (gama-g3) aims to adjust geodetic networks in a global geocentric system. The stable branch (gama-local) enables common adjustment of possibly correlated horizontal directions and distances, horizontal angles, slope distances and zenith angles, height differences, observed coordinates (used in sequential adjustment, etc.) and observed coordinate differences (vectors). Although such an adjustment model has now been made obsolete by global positioning systems, it can still serve as an educational tool for demonstrating adjustment procedures to students and as a starting platform for developing the new branch of the project (gama-g3).

Numerical solution of least squares adjustment in geodesy is most commonly based on the solution of normal equations. As the Gama project was also meant to be a comparison tool, it was desirable to use a different method, and Singular Value Decomposition (SVD) was implemented as the main numerical algorithm. As a testing alternative, Gama implements another algorithm from the family of orthogonal decompositions, based on Gram-Schmidt orthogonalization (GSO). Practical experience with both algorithms is discussed.

In the Gama project, the geodetic input data are described in Extensible Markup Language (XML). The primary motivation for using XML was to define structured input data for the adjustment of a local geodetic network. The most important feature of XML is probably the ease of defining a grammar for user data (a class of XML documents), so that documents can be validated even independently of our applications. One of the goals of the Gama project is to build a collection of model geodetic networks described in XML. The lack of reliable testing data was one of the major obstacles when testing the implementation of the numerical solution of the geodetic network adjustment.

2 Adjustment and analysis of observations

Geodesy as a scientific discipline studies the geometry of the Earth or, from the practical point of view, the positioning of objects located on the Earth's surface or in zones relatively close to it. The input information consists of geodetic observations.
The spectrum of observation types dealt with by geodesy is very wide, and ranges from classical astro-geodetic observations (astronomical longitude and latitude, variations and position of the Earth's pole) and measurements of geophysical quantities (gravity acceleration and its local anomalies), through traditional geometric observables like directions, angles and distances, to photogrammetric measurements of historical monuments. However, the main importance in geodesy today is given to satellite global positioning systems (first of all NAVSTAR GPS and other complementary systems like DORIS or GLONASS).

The key role in processing geodetic data belongs to the sphere of applied statistics in geodesy, traditionally called adjustment of observations. The processing of geodetic observations is determined by the choice of an appropriate mathematical model, which can be symbolically expressed as

    $$f(c, x, l) = 0, \qquad (1)$$

where f is a vector of functions describing the relations between constants c, unknown parameters x and observed quantities l. Corresponding to the three components of this model are three mathematical spaces: parameter, observation and model space [1]. The three basic components of the mathematical model (1) are depicted in Fig. 1, where A, B, G and H are the matrices of the corresponding linearized relations (values of constants c are not estimated in geodesy and we can consider them to be a part of model space). Models can be direct, indirect or implicit; linear or nonlinear; and they can occur individually or in combinations:

    model explicit in x:  $x = g(l)$,  $x = Gl + v$,
    model explicit in l:  $l = h(x)$,  $l = Hx + v$,
    implicit model:       $f(x, l) = 0$,  $Ax + Gl + v = 0$.

3 Least Squares and singular systems

When adjusting geodetic observations we are relatively often faced with models leading to singular sets of linear equations. Typically these are models without fixed points, i.e., no points with fixed coordinates are given, or the number of fixed points is not sufficient (free networks, see [6] for more information).

Let us take as an example the local network with observed directions and distances from Fig. 2. The relationship between the unknown adjusted coordinates and the observations can be expressed after linearization as the project equations

    $$Ax = l + v, \qquad (2)$$

where A is the design matrix, x the vector of unknowns, l the vector of reduced observations and v the vector of residuals (misclosure vector). In geodesy, the number of observations is always higher than the number of unknowns. Project equations (2) thus represent an overdetermined system, and matrix A has more rows than columns. Least Squares is the basic method used in geodesy for observation adjustment. It gives us the unique solution x of system (2) that minimizes the Euclidean norm of the residual vector

    $$\min v^T v. \qquad (3)$$
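To make the linearization step concrete, the following C++ fragment sketches how one row of the design matrix A and the corresponding reduced observation in (2) are obtained for a single distance measurement. It illustrates only the standard textbook procedure; the names and the structure are ours and are not taken from the Gama sources.

    #include <cmath>
    #include <cstdio>

    // One row of the project equations (2) for a distance observed
    // between points i and j with approximate coordinates (xi, yi),
    // (xj, yj).  The unknowns are coordinate corrections.  Linearizing
    // s_ij = sqrt((xj-xi)^2 + (yj-yi)^2) gives the partial derivatives
    // that form the coefficients of one row of the design matrix A; the
    // right hand side is the reduced observation l = s_obs - s_approx.
    struct DistanceRow {
        double a_xi, a_yi, a_xj, a_yj;  // coefficients of one row of A
        double l;                       // reduced observation
    };

    DistanceRow linearize_distance(double xi, double yi,
                                   double xj, double yj, double observed)
    {
        const double dx = xj - xi, dy = yj - yi;
        const double s0 = std::sqrt(dx*dx + dy*dy); // from approx. coords
        DistanceRow r;
        r.a_xi = -dx/s0;  r.a_yi = -dy/s0;
        r.a_xj =  dx/s0;  r.a_yj =  dy/s0;
        r.l    = observed - s0;
        return r;
    }

    int main()
    {
        DistanceRow r = linearize_distance(0, 0, 100, 100, 141.427);
        std::printf("A row: %g %g %g %g   l: %g\n",
                    r.a_xi, r.a_yi, r.a_xj, r.a_yj, r.l);
    }

For the network in Fig. 2, one such row is generated for every observed distance, and analogous rows (with the partial derivatives of the direction function) for every observed direction.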
A method commonly used for solving the project equations (2) (model explicit in observations) is based on the normal equations

    $$N = A^T A, \quad n = A^T l, \quad x = N^{-1} n. \qquad (4)$$

[Fig. 1: Linear relations between parameter, observation and model spaces]

[Fig. 2: Example of a local geodetic free network]

Apart from the unknown vector x (and residuals v), in geodesy we are always interested in estimates of the precision of the adjusted quantities, in geodetic practice represented by the variance-covariance matrices of the adjusted unknowns $C_{xx}$ and the adjusted observations $C_{ll}$

    $$C_{xx} = m_0^2 N^{-1}, \qquad (5)$$
    $$C_{ll} = A C_{xx} A^T. \qquad (6)$$

The geometric shape of our adjusted network is defined by the observed directions (or angles) and distances. If we fixed the coordinates of two or more points, the network shape would necessarily be distorted. Normal equations would lead to an adjustment solution in which the residuals would be dependent on the coordinates of the fixed points. This way we would degrade our observations in cases when the coordinates of the network points are either unknown or known with lower precision. On the other hand, if we consider the coordinates of all points to be free, the corresponding matrix N is inevitably singular; the columns of matrix A are linearly dependent (the network can float freely in the coordinate system) and the normal equation matrix N is positive-semidefinite

    $$p^T N p \ge 0, \quad \forall p \ne 0.$$

To get a unique solution we have to define additional constraints regularizing the system, preferably without deforming the network shape. In geodetic practice we most often meet the following approaches:

- A singular system is regularized by introducing pseudo-observations, typically with huge weights, that play a similar role as a set of constraint equations.

- An explicit system of constraint equations is defined to make the given system regular

    $$Cx = c. \qquad (7)$$

  Normal equations then become

    $$\begin{pmatrix} N & C^T \\ C & 0 \end{pmatrix} \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} n \\ c \end{pmatrix}, \qquad (8)$$

  where $\lambda$ is the vector of Lagrange multipliers. In this case matrix C is problem dependent and needs to be known explicitly in advance.

- The Euclidean norm of a certain subset of the unknown parameter vector x is minimized

    $$\min \sum_{i \in \mathcal{I}} x_i^2. \qquad (9)$$

  The set of indices $\mathcal{I}$ can contain all elements, but more often only selected elements of x.

In the case of a plane geodetic free network we can geometrically interpret the last constraint (9) as follows. By minimizing the Euclidean norm of the residual vector (3), the shape and scale (if at least one distance is available) of the adjusted network, together with the covariances of the adjusted observations, are uniquely defined. The second additional constraint (9) then defines the localization of the network in the coordinate system. Apart from the adjusted network shape we simultaneously define its shift and rotation in the coordinate system. Another equivalent interpretation is that constraint (9) defines the particular solution of (2) in which the trace of the variance-covariance submatrix corresponding to indices $i \in \mathcal{I}$ is minimal.
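For comparison with the orthogonal decomposition methods described below, the following sketch shows the conventional solution of (2) through the normal equations (4) using a Cholesky factorization $N = R^T R$. It is illustrative code written under the assumption of a regular network (full column rank of A); it is not the implementation used in Gama, which deliberately avoids normal equations.

    #include <vector>
    #include <cmath>
    #include <stdexcept>

    // Conventional adjustment via normal equations (4): N = A'A,
    // n = A'l, x = N^-1 n, solved with a Cholesky factorization N = R'R
    // (R upper triangular, stored in the upper triangle of N).
    using Matrix = std::vector<std::vector<double>>;

    std::vector<double> solve_normal_equations(const Matrix& A,
                                               const std::vector<double>& l)
    {
        const size_t m = A.size(), n = A[0].size();
        Matrix N(n, std::vector<double>(n, 0.0));
        std::vector<double> rhs(n, 0.0);
        for (size_t i = 0; i < n; ++i) {            // N = A'A, rhs = A'l
            for (size_t j = 0; j < n; ++j)
                for (size_t k = 0; k < m; ++k) N[i][j] += A[k][i]*A[k][j];
            for (size_t k = 0; k < m; ++k) rhs[i] += A[k][i]*l[k];
        }
        for (size_t i = 0; i < n; ++i) {            // Cholesky N = R'R
            for (size_t j = i; j < n; ++j) {
                double s = N[i][j];
                for (size_t k = 0; k < i; ++k) s -= N[k][i]*N[k][j];
                if (i == j) {
                    if (s <= 0.0)
                        throw std::runtime_error("N is not positive-definite");
                    N[i][j] = std::sqrt(s);
                } else {
                    N[i][j] = s / N[i][i];
                }
            }
        }
        std::vector<double> x(n);                   // R'y = rhs, then Rx = y
        for (size_t i = 0; i < n; ++i) {
            double s = rhs[i];
            for (size_t k = 0; k < i; ++k) s -= N[k][i]*x[k];
            x[i] = s / N[i][i];
        }
        for (size_t i = n; i-- > 0; ) {
            double s = x[i];
            for (size_t k = i+1; k < n; ++k) s -= N[i][k]*x[k];
            x[i] = s / N[i][i];
        }
        return x;
    }

For a singular N (a free network) the factorization above fails; the regularization approaches listed in this section, or the orthogonal methods of the following sections, must then be used.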
4 Normal equations and numerical stability

The numerical solution of the adjustment of observed quantities based on normal equations can be numerically unstable, and in certain cases we should prefer other numerical algorithms that directly solve the project equations (2). A possible source of trouble is the normal equations themselves, or more precisely the condition number of the normal equations. Let us restrict our discussion here to the simple case when matrix A does not contain linearly dependent columns and matrix N is positive-definite.

The condition number of matrix A is defined as

    $$\kappa(A) = \sqrt{\frac{\lambda_{\max}(A^T A)}{\lambda_{\min}(A^T A)}}, \qquad (10)$$

where $\lambda_{\max}(A^T A)$ and $\lambda_{\min}(A^T A)$ denote the maximal and minimal eigenvalues of matrix $A^T A$. If we solve a linear set of equations, its condition number represents the minimal upper estimate of the ratio of the relative error of x to the relative error of the right hand side l. From equation (10) it directly follows that the condition number of the normal equation matrix N is the square of the condition number of the project equation matrix A

    $$\kappa(N) = \kappa(A)^2. \qquad (11)$$

We can say that when solving poorly conditioned normal equations we lose twice as many correct decimal digits in the solution x as in a direct solution of the project equations.

Probably the most important class of algorithms for the direct solution of project equations (2) is the family of orthogonal decomposition algorithms. Apart from other goals, GNU project Gama has been planned to be a kind of benchmark, i.e., a tool for checking adjustment results from other software products. For this reason it was desirable to base the adjustment on a numerical method other than the traditional solution of normal equations, and Singular Value Decomposition (SVD) was implemented as the main numerical algorithm. As an alternative, another orthogonal decomposition adjustment algorithm, GSO (based on Gram-Schmidt orthogonalization), is also available. We give a brief description of both algorithms in the following sections.

5 Gram-Schmidt orthogonalization

Gram-Schmidt orthogonal decomposition is an algorithm for computing the factorization

    $$A = QR, \quad Q^T Q = 1, \qquad (12)$$

where Q is an orthogonal matrix and R is an upper triangular matrix. Matrix R here is identical to the upper triangular matrix of the Cholesky decomposition of the normal equations

    $$N = A^T A = R^T Q^T Q R = R^T R. \qquad (13)$$

Gram-Schmidt orthogonalization is a very straightforward and relatively simple algorithm that can be implemented in several variants differing in the order in which the vectors are orthogonalized. The following three algorithms are adopted from [2, p. 300-301].

Algorithm 1.1 [Modified Gram-Schmidt (MGS), row version]

    for k = 1, 2, ..., n
        q̂_k := a_k^(k);  r_kk := (q̂_k^T q̂_k)^(1/2);  q_k := q̂_k / r_kk
        for i = k+1, ..., n
            r_ki := q_k^T a_i^(k);  a_i^(k+1) := a_i^(k) - r_ki q_k
        end
    end

Algorithm 1.2 [Modified Gram-Schmidt (MGS), column version]

    for k = 1, 2, ..., n
        for i = 1, 2, ..., k-1
            r_ik := q_i^T a_k^(i);  a_k^(i+1) := a_k^(i) - r_ik q_i
        end
        q̂_k := a_k^(k);  r_kk := (q̂_k^T q̂_k)^(1/2);  q_k := q̂_k / r_kk
    end

Algorithm 1.3 [Classical Gram-Schmidt (CGS)]

    for k = 1, 2, ..., n
        for i = 1, 2, ..., k-1
            r_ik := q_i^T a_k
        end
        q̂_k := a_k - sum_{i=1}^{k-1} r_ik q_i
        r_kk := (q̂_k^T q̂_k)^(1/2);  q_k := q̂_k / r_kk
    end

It should not be forgotten that the variant known as Classical Gram-Schmidt has very poor numerical properties, in that there is typically a severe loss of orthogonality among the computed $q_i$. A rearrangement of the calculation, known as Modified Gram-Schmidt, yields a much sounder computational procedure [3, p. 230-232].
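The row version of MGS (Algorithm 1.1) translates almost line by line into C++. The following sketch, which assumes a full column rank matrix stored as a vector of rows, is our illustration and not the actual gmatvec/Gama code; A is overwritten by Q.

    #include <vector>
    #include <cmath>

    // Modified Gram-Schmidt, row version (Algorithm 1.1): factor the
    // m x n matrix A (m >= n, full column rank) as A = QR with
    // orthonormal columns Q and upper triangular R.  A is overwritten
    // by Q.
    using Matrix = std::vector<std::vector<double>>;

    void mgs(Matrix& A, Matrix& R)
    {
        const size_t m = A.size(), n = A[0].size();
        R.assign(n, std::vector<double>(n, 0.0));
        for (size_t k = 0; k < n; ++k) {
            double s = 0;                           // r_kk = ||a_k||
            for (size_t row = 0; row < m; ++row) s += A[row][k]*A[row][k];
            R[k][k] = std::sqrt(s);
            for (size_t row = 0; row < m; ++row) A[row][k] /= R[k][k];
            for (size_t i = k+1; i < n; ++i) {
                double r = 0;                       // r_ki = q_k' a_i
                for (size_t row = 0; row < m; ++row) r += A[row][k]*A[row][i];
                R[k][i] = r;                        // a_i -= r_ki q_k
                for (size_t row = 0; row < m; ++row) A[row][i] -= r*A[row][k];
            }
        }
    }

The Least Squares solution of (2) then follows from $Rx = Q^T l$ by back substitution, in agreement with (13).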
5.1 Generalized orthogonalization algorithm (GSO)

The generalized orthogonalization algorithm (GSO), a method based on Gram-Schmidt orthogonalization for the numerical solution of various adjustment models in geodesy, was elaborated by František Charamza [4, 5]. GSO was implemented in GNU Gama to preserve this rarely used but interesting method, and to offer an alternative numerical algorithm to SVD (which we expected to give better numerical results for numerically unstable systems).

Algorithm GSO operates on a block matrix structure

    $$M = \begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix} \longrightarrow Q = \begin{pmatrix} Q_1 & Q_2 \\ Q_3 & Q_4 \end{pmatrix}, \qquad (14)$$

where the transition from M to Q is defined by the equations

    $$Q_1^T Q_1 = 1, \qquad (15)$$
    $$M_1 = Q_1 R, \qquad (16)$$
    $$Q_1 = M_1 R^{-1}, \quad Q_2 = M_2 - Q_1 Q_1^T M_2, \qquad (17)$$
    $$Q_3 = M_3 R^{-1}, \quad Q_4 = M_4 - Q_3 Q_1^T M_2, \qquad (18)$$

and R is an upper triangular matrix. Algorithm MGS is applied to the block matrix M so that the column dot products are computed only over the submatrices $(M_1, M_2)$, the projections $r_{ki} q_k$ are computed for the full columns of M, and the whole process is terminated after all columns of the left block $(M_1; M_3)$ have been processed. This step is called the first orthogonalization in algorithm GSO.

Let us take as an example the linear system of project equations (2), $Ax = l + v$, and apply algorithm GSO to the block matrix

    $$M = \begin{pmatrix} A & l \\ -1 & 0 \end{pmatrix} \longrightarrow Q = \begin{pmatrix} Q & v \\ R^{-1} & x \end{pmatrix}.$$

The result is directly the vector of unknown parameters x and the vector of residuals v. The cofactors (weight coefficients) of the adjusted parameters $q_{x_i x_j}$ are available as the dot products of rows i and j of submatrix $R^{-1}$, the cofactors of the adjusted observations $q_{l_m l_n}$ are computed as the dot products of rows m and n of submatrix Q, and the mixed cofactors $q_{x_i l_n}$ similarly as the dot products of the i-th row of $R^{-1}$ and the n-th row of matrix Q.

5.2 Algorithm GSO and singular systems

Let us suppose now that the project equation matrix A contains r linearly independent columns and the remaining d linearly dependent columns. Without loss of generality we can assume that the linearly dependent columns are located in the right part of matrix A. We denote the linearly independent columns $A_1$, the linearly dependent columns $A_2$, and the matrix of their linear combinations $\Lambda$

    $$A = (A_1, A_2), \quad A_2 = A_1 \Lambda, \quad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}. \qquad (19)$$

Now we can rewrite the project equations as

    $$v = A_1 x_1 + A_2 x_2 - l = A_1 (x_1 + \Lambda x_2) - l = A_1 \tilde{x} - l. \qquad (20)$$

As matrix $A_1$ does not contain linearly dependent columns, a unique solution $\tilde{x}$ of (20) exists that minimizes the Euclidean norm of v. If we know matrix $\Lambda$ and vector $\tilde{x}$, then any solution x of

    $$\tilde{x} = x_1 + \Lambda x_2, \quad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \qquad (21)$$

is at the same time the Least Squares solution of (20) with the same vector of residuals v.

If we apply algorithm GSO to the matrix

    $$M^I = \begin{pmatrix} M_1^I & M_2^I \\ M_3^I & M_4^I \end{pmatrix} = \begin{pmatrix} A_1 & A_2 & l \\ -1 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} \qquad (22)$$

we receive a block matrix

    $$Q^I = \begin{pmatrix} Q & 0 & v \\ R^{-1} & \Lambda & \tilde{x} \\ 0 & -1 & 0 \end{pmatrix}. \qquad (23)$$

In the case of singular systems in GSO, the first orthogonalization thus defines a particular solution in which the unknown parameters corresponding to the linearly dependent columns of A are set to zero. From CGS it emerges directly that matrix $\Lambda$ is the matrix of the linear combinations from (19). The cofactors are computed the same way as in the case of regular systems.
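A minimal sketch of the first orthogonalization of section 5.1 for a regular system follows: MGS is run on the block matrix (A l; -1 0), with dot products restricted to the upper m rows and projections applied to the full columns. Sign conventions follow the derivation above (the lower part of the last column yields x, the upper part holds l - Ax). This is our illustration of Charamza's scheme, not code from Gama.

    #include <vector>
    #include <cmath>

    // First orthogonalization of GSO applied to M = (A l; -I 0).
    // Column norms and dot products are taken over the upper m rows
    // only; projections are subtracted from full columns.  On return
    // the lower part of the last column holds the LS solution x.
    using Matrix = std::vector<std::vector<double>>;

    std::vector<double> gso_solve(const Matrix& A, const std::vector<double>& l)
    {
        const size_t m = A.size(), n = A[0].size();
        Matrix M(m + n, std::vector<double>(n + 1, 0.0));
        for (size_t i = 0; i < m; ++i) {
            for (size_t j = 0; j < n; ++j) M[i][j] = A[i][j];
            M[i][n] = l[i];
        }
        for (size_t j = 0; j < n; ++j) M[m + j][j] = -1.0;  // block -I

        for (size_t k = 0; k < n; ++k) {
            double norm = 0;                 // norm over the upper m rows
            for (size_t i = 0; i < m; ++i) norm += M[i][k]*M[i][k];
            norm = std::sqrt(norm);
            for (size_t i = 0; i < m + n; ++i) M[i][k] /= norm;
            for (size_t j = k + 1; j <= n; ++j) {
                double dot = 0;              // dot product over upper rows
                for (size_t i = 0; i < m; ++i) dot += M[i][k]*M[i][j];
                for (size_t i = 0; i < m + n; ++i) M[i][j] -= dot*M[i][k];
            }
        }
        std::vector<double> x(n);            // lower part of the last column
        for (size_t j = 0; j < n; ++j) x[j] = M[m + j][n];
        return x;
    }

After the loop the lower block of the first n columns holds $R^{-1}$ (up to sign), from which the cofactors $q_{x_i x_j}$ are obtained as row dot products, exactly as described above.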
When computing GSO numerically, we naturally do not obtain exactly zero vectors in the positions of the (almost) linearly dependent columns. We declare to be linearly dependent those columns of A whose norms drop below a given tolerance. During the first orthogonalization we set to zero the corresponding subvectors in the area of $A_2$. These values can be considered as random noise that adds no information to the whole solution.

The results of the first orthogonalization are first of all the vector of residuals and the cofactors of the adjusted observations. It now remains to determine the vector of unknown parameters x that satisfies condition (9), and its cofactors (weight coefficients). This step of GSO is called the second orthogonalization. By solving the system of linear equations

    $$\begin{pmatrix} \Lambda \\ -1 \end{pmatrix} x_2 \simeq \begin{pmatrix} \tilde{x} \\ 0 \end{pmatrix} \qquad (24)$$

we get, according to (21), a vector x with the minimal norm. If we select from (24) only certain rows, we obtain the solution minimizing the corresponding subvector. This system can naturally be solved using GSO.

If we need the cofactors of the adjusted unknowns, as is the standard case in geodetic applications, during the second orthogonalization we have to process the whole lower submatrix that resulted from the first orthogonalization step

    $$M^{II} = (M_1^{II} \; M_2^{II}) = \begin{pmatrix} \Lambda & R^{-1} & \tilde{x} \\ -1 & 0 & 0 \end{pmatrix}. \qquad (25)$$

During the first orthogonalization, the linearly dependent columns in $M_1$ are identified and explicitly zeroed. The result of the first orthogonalization is a particular solution in which the unknowns corresponding to the linearly dependent columns are all set to zero. Naturally their cofactors are zero as well. During the second orthogonalization step only the submatrix $(Q_3^I, Q_4^I)$ is influenced, and the orthogonalization process is carried out as follows:

- Gram-Schmidt orthogonalization runs only through the columns corresponding to the linearly dependent columns of $M_1$, as if they were numbered 1, 2, ..., d, where d is the nullity of $M_1$,

- dot products are computed only over the indices $i \in \mathcal{I}$ from the regularization condition (9), $\min \sum_{i \in \mathcal{I}} x_i^2$.

Linearly dependent columns are zeroed during the second orthogonalization even in the region of submatrix $(Q_3, Q_4)$. The cofactors after the second orthogonalization are computed in the same way as in the case of regular systems.

6 Singular Value Decomposition (SVD)

For any real m × n matrix A, m ≥ n, there exists the singular value decomposition

    $$A = U W V^T, \qquad (26)$$
    $$U^T U = 1, \quad V^T V = V V^T = 1,$$

where U is an m × n matrix with orthogonal columns, W is an n × n diagonal matrix with nonnegative elements and V is an n × n square orthogonal matrix (this variant is referred to as the thin SVD [3]). Matrix W is uniquely determined up to the permutation of its diagonal elements.

The diagonal elements $w_i$ are called the singular values of matrix A. Their squares are the eigenvalues of the n × n matrix $A^T A$. Thus, the condition number of matrix A can be computed as the ratio of the maximal and minimal singular values

    $$\kappa(A) = \frac{w_{\max}}{w_{\min}}. \qquad (27)$$

With the singular value decomposition we can directly express the vector of unknown parameters x from the project equations

    $$Ax = l, \quad x = V W^{-1} U^T l, \quad W^{-1} = \mathrm{diag}(1/w_i). \qquad (28)$$

If matrix A has more rows than columns (overdetermined system), then the Euclidean norm of the residual vector $v = Ax - l$ is minimal and vector x is the Least Squares solution to the project equations (2).
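Once the thin SVD of A is available (for instance from the Golub-Reinsch algorithm [7]), the Least Squares solution (28) amounts to two matrix-vector products. The routine below is an illustrative sketch with our own naming, not the Gama interface; it assumes a full rank system, the rank deficient case being treated next.

    #include <vector>

    // Least Squares solution (28), x = V W^-1 U'l, from a precomputed
    // thin SVD A = U W V' (U: m x n, w: n singular values, V: n x n).
    // How the decomposition itself is obtained is outside the scope of
    // this sketch.
    using Matrix = std::vector<std::vector<double>>;

    std::vector<double> svd_solve(const Matrix& U, const std::vector<double>& w,
                                  const Matrix& V, const std::vector<double>& l)
    {
        const size_t m = U.size(), n = w.size();
        std::vector<double> c(n, 0.0), x(n, 0.0);
        for (size_t i = 0; i < n; ++i) {       // c_i = (U_(i)' l) / w_i
            for (size_t k = 0; k < m; ++k) c[i] += U[k][i]*l[k];
            c[i] /= w[i];
        }
        for (size_t j = 0; j < n; ++j)         // x = V c
            for (size_t i = 0; i < n; ++i) x[j] += V[j][i]*c[i];
        return x;
    }

For rank deficient systems the reciprocals $1/w_i$ of the (numerically) zero singular values are replaced by zeros, as described below.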
For a matrix A with linearly dependent columns, d singular values are zero (d is the dimension of the null space of A). Singular value decomposition explicitly constructs orthonormal vector bases of the null space and the range of A. The columns of matrix U corresponding to nonzero singular values $w_i$ form the orthonormal basis of the range of A; similarly, the columns of matrix V corresponding to zero singular values form the orthonormal basis of the null space of A

    $$\mathrm{null}(A) = \{ x : Ax = 0, \; x \in \mathbf{R}^n \},$$
    $$\mathrm{range}(A) = \{ y : y = Ax, \; x \in \mathbf{R}^n \}.$$

In the case of rank deficient systems, we set into the diagonal of the inverse matrix $W^{-1}$ zeros instead of reciprocals for the elements corresponding to the linearly dependent columns of A

    $$W^{-1} = \mathrm{diag}(w_i^{+}), \qquad w_i^{+} = \begin{cases} 1/w_i & \text{for } w_i \ne 0, \\ 0 & \text{for } w_i = 0. \end{cases} \qquad (29)$$

The resulting particular solution x minimizes both the Euclidean norm of the residuals and at the same time the norm of the unknown parameters x. The rather surprising replacement of the reciprocal $1/0 = \infty$ by zero can be explained as follows. The solution vector x of the overdetermined system $Ax \simeq l$ can be expressed as the linear combination of the columns of matrix V

    $$x = \sum_{i=1}^{n} \left( \frac{1}{w_i} U_{(i)}^T l \right) V_{(i)}. \qquad (30)$$

The coefficients in parentheses are the dot products of the columns of U and the right hand side l, multiplied by the reciprocal value of the singular value. The zero singular values correspond to the linearly dependent columns of matrix A that add no further information to the given system. Setting the corresponding diagonal elements of matrix $W^{-1}$ to zeros is equivalent to eliminating the linearly dependent columns from matrix A.

With matrix $W^{-1}$ defined according to (29), the cofactors are computed in the same way for regular and singular systems

    $$Q_{xx} = N^{-1} = (A^T A)^{-1} = (V W U^T U W V^T)^{-1} = V W^{-1} W^{-1} V^T, \qquad (31)$$
    $$Q_{ll} = A Q_{xx} A^T = (U W V^T)(V W^{-1} W^{-1} V^T)(V W U^T) = U U^T, \qquad (32)$$
    $$Q_{lx} = A Q_{xx} = U W V^T V W^{-1} W^{-1} V^T = U W^{-1} V^T. \qquad (33)$$

The cofactors (weight coefficients) for the adjusted parameters, the adjusted observations and the mixed cofactors are thus computed, similarly as in the case of GSO, as the dot products of the rows of matrices U and V, multiplied by the diagonal elements of $W^{-1}$ in the case of the cofactors of x.

6.1 Algorithm SVD and singular systems

What now remains is to show how to compute the particular solution that minimizes only a given subset of the vector x according to the second regularization condition (9). We compose an overdetermined system of linear equations

    $$\mathcal{N} c + x \simeq \bar{x}, \qquad (34)$$

where the columns of matrix $\mathcal{N}$ are the vectors of the null space basis

    $$\mathcal{N} = (V_{(i_1)}, V_{(i_2)}, \ldots, V_{(i_d)}), \quad w_{i_k} = 0,$$

and c is the vector of the coefficients of the linear combination of the null space basis vectors that, when added to vector x, minimizes the selected subvector of the unknown parameters $\bar{x}$ (here they act as residuals). From a comparison of (34) with equations (24) and (25) it is obvious that for computing $\bar{x}$ we can use the second orthogonalization of algorithm GSO. If the GSO second orthogonalization is applied to matrix V from the singular value decomposition

    $$M^{II} = (M_1^{II} \; M_2^{II}) = (\mathcal{N} \;\; V), \qquad (35)$$

we obtain matrix $\bar{V}$. If we now replace the singular value decomposition matrix V by matrix $\bar{V}$, we can compute the vector $\bar{x}$ and all cofactors according to the same formulas (30-33) as in the case of the standard SVD solution x.
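The cofactor formulas (31)-(33) reduce to scaled dot products of rows of U and V, so individual cofactors can be computed without forming any of the matrices $Q_{xx}$, $Q_{ll}$, $Q_{lx}$ explicitly. The helper functions below are our illustration, including the tolerance used to declare a singular value zero in the sense of (29); they are not the Gama interface.

    #include <vector>

    // Cofactors from the SVD according to (31)-(33), with reciprocals of
    // zero singular values replaced by zeros as in (29):
    //   q_xx(i,j) = sum_k V(i,k) V(j,k) / w_k^2
    //   q_ll(m,n) = sum_k U(m,k) U(n,k)     (nonzero w_k only)
    //   q_lx(m,i) = sum_k U(m,k) V(i,k) / w_k
    using Matrix = std::vector<std::vector<double>>;

    double inv_w(double w, double tol) { return (w > tol) ? 1.0/w : 0.0; }

    double q_xx(const Matrix& V, const std::vector<double>& w,
                int i, int j, double tol = 1e-12)
    {
        double q = 0;
        for (size_t k = 0; k < w.size(); ++k)
            q += V[i][k]*V[j][k]*inv_w(w[k], tol)*inv_w(w[k], tol);
        return q;
    }

    double q_ll(const Matrix& U, const std::vector<double>& w,
                int m, int n, double tol = 1e-12)
    {
        double q = 0;
        for (size_t k = 0; k < w.size(); ++k)
            if (w[k] > tol) q += U[m][k]*U[n][k];  // range columns only
        return q;
    }

    double q_lx(const Matrix& U, const Matrix& V, const std::vector<double>& w,
                int m, int i, double tol = 1e-12)
    {
        double q = 0;
        for (size_t k = 0; k < w.size(); ++k)
            q += U[m][k]*V[i][k]*inv_w(w[k], tol);
        return q;
    }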
7 Network adjustment in GNU Gama

Gama was started in 1998 as a local educational project, mainly to demonstrate to our students the power and capability of object programming (the project is written in C++), and at the same time to show some alternatives to traditional approaches to numerical solutions of Least Squares adjustments based on normal equations. Project Gama was released under the terms of the GNU General Public License, and in 2001 received the official status of GNU software.

Numerical solution of geodetic network adjustment in Gama is based on an abstract C++ class, and currently two derived classes are available, implementing the algorithms SVD and GSO. SVD is the primary algorithm used in Gama (one of our long term goals is to add more numerical solutions, namely solutions exploiting the sparse structure of the project equations). From this perspective, algorithm GSO was implemented in Gama only as a testing alternative, both for comparing numerical results and for testing the hierarchy of the adjustment classes in practice.

It is generally agreed that a bad implementation of GSO can produce disastrous results. For example, during the first orthogonalization step of GSO we set to zeros the unknown parameters corresponding to linearly dependent columns. In the case of a free geodetic network adjustment these are the coordinates of certain points; the whole network is pinned on these points, and clearly, if close points are selected, the regularization is unstable. The order of the columns in the orthogonalization is important.

From practical experience we know that vector norms in the GSO orthogonalization process generally tend to decrease. As GSO is just an alternative algorithm in Gama and its performance is not a crucial point, we implemented it with full pivoting, i.e., in each orthogonalization cycle the vector with the maximal norm is selected as the pivot (with this modification GSO is about twice as slow as SVD for large networks).
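The full pivoting modification described above amounts to one extra scan per orthogonalization cycle. The following fragment sketches the pivot selection for the GSO block matrix: norms are taken over the upper m rows only, columns are swapped in full, and the permutation is recorded so that the unknowns can be reordered afterwards. Names are ours, not Gama's.

    #include <vector>
    #include <utility>

    // Before column k is used as a pivot, swap into position k the
    // not-yet-processed column with the maximal norm (computed over the
    // upper m rows of the GSO block matrix M).  perm tracks the column
    // permutation so that the unknowns can be restored to their
    // original order after the orthogonalization.
    void pivot_columns(std::vector<std::vector<double>>& M, size_t m,
                       size_t n, size_t k, std::vector<size_t>& perm)
    {
        size_t pivot = k;
        double max_norm = -1.0;
        for (size_t j = k; j < n; ++j) {       // candidate pivot columns
            double norm = 0;
            for (size_t i = 0; i < m; ++i) norm += M[i][j]*M[i][j];
            if (norm > max_norm) { max_norm = norm; pivot = j; }
        }
        if (pivot != k) {                      // swap full columns
            for (size_t i = 0; i < M.size(); ++i)
                std::swap(M[i][k], M[i][pivot]);
            std::swap(perm[k], perm[pivot]);
        }
    }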
Singular Value Decomposition is a very robust method for dealing with systems that are either singular or numerically close to singular. Even with full pivoting we had expected GSO to prove inferior to SVD, at least in cases with ill-conditioned matrices. Surprisingly, with all the real geodetic networks that we have available this was not the case. Apart from real data, we used series of randomly generated three-dimensional networks for testing.

Our implementation of SVD is based on the classical algorithm published by Golub and Reinsch [7] (the ALGOL procedure SVD). The decomposition is constructed in two phases. It starts with the Householder reduction to bidiagonal form, followed by diagonalization. Contrary to our expectations, SVD as used in Gama has not proved to give numerically better results, and in some cases it has even lost convergence in the diagonalization phase. A simple and tempting explanation that comes first to mind would be that the SVD implementation in Gama is somehow wrong. After all testing and revisions this does not seem to be the point. A possible explanation might be given by the following quotation from [3]:

"… Finally, we mention Jacobi's method … for the SVD. This transformation method repeatedly multiplies A on the right by elementary orthogonal matrices (Jacobi rotations) until A converges to UΣ; the product of the Jacobi rotations is V. Jacobi is slower than any of the above transformation methods (it can be made to run within about twice the time of QR …) but has the useful property that for certain A it can deliver the tiny singular values, and their singular vectors, much more accurately than any of the above methods, provided that it is properly implemented …"

It would surely be helpful to have more numerical methods implemented in Gama, for example the above mentioned Jacobi method for SVD.

A practical problem during the testing of the adjustment methods in Gama was the relative shortage of reliable observation data and their adjustment results. To enable easy comparison with other software packages we defined a description of geodetic networks in XML (we use a DTD for the definition of the formal syntax of our structured data). Conversion from a well defined data format into XML is a relatively simple task, but the processing of XML is not trivial and cannot be done without an XML parser. In the GNU Gama project we use the XML parser expat by James Clark, see http://expat.sourceforge.net/. We believe that XML is the best data format for the description and exchange of structured data in the Gama project. One of the goals of our project is to compile a free collection of geodetic networks described in XML.

References

[1] Vaníček, P., Krakiwsky, E. J.: Geodesy: The Concepts. 2nd ed., North-Holland, 1986, ISBN 0-444-87775-4.

[2] Björck, Å.: "Numerics of Gram-Schmidt Orthogonalization." Linear Algebra and Its Applications, Vol. 197-198 (1994), p. 297-316.

[3] Golub, G. H., Van Loan, C. F.: Matrix Computations. 3rd ed., The Johns Hopkins University Press, 1996, ISBN 0-8018-5413-X.

[4] Charamza, F.: "GSO - An Algorithm for Solving Least-Squares Problems with Possibly Rank Deficient Matrices." In: Optimization of Design and Computation of Control Networks. Budapest, Akadémiai Kiadó, 1979.

[5] Charamza, F.: "An Algorithm for the Minimum-Length Least-Squares Solution of a Set of Observation Equations." Studia Geophysica et Geodaetica, Vol. 22 (1978), p. 129-139.

[6] Koch, K. R.: Parameter Estimation and Hypothesis Testing in Linear Models. 2nd ed., Springer-Verlag, 1999, ISBN 3-540-65257-4.

[7] Golub, G. H., Reinsch, C.: "Singular Value Decomposition and Least Squares Solutions." Numerische Mathematik, Vol. 14 (1970), p. 403-420; also in Handbook for Automatic Computation, Vol. II - Linear Algebra (1971), p. 134-151.

[8] Demmel, J. W.: "Singular Value Decomposition." In: Bai, Z., Demmel, J., Dongarra, J., Ruhe, A., van der Vorst, H. (eds.): Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000, http://www.cs.utk.edu/~dongarra/etemplates/book.html

Prof. Ing. Aleš Čepek, CSc.
phone: +420 224 354 647
e-mail: cepek@fsv.cvut.cz
Department of Mapping and Cartography

Ing. Jan Pytel
phone: +420 224 354 644
e-mail: pytel@gama.fsv.cvut.cz

Czech Technical University in Prague
Faculty of Civil Engineering
Thákurova 7
166 29 Prague, Czech Republic