Ibn Al-Haitham Jour. for Pure & Appl. Sci. Vol. 26 (2) 2013

Error Analysis in Numerical Algorithms

Shawki A. Abbas
Department of Mathematics / College of Science / University of Baghdad

Received in: 9 May 2012, Accepted in: 20 November 2012

Abstract
In this paper we apply the concept of error analysis, using the linearization method and new condition numbers that constitute optimal bounds in appraisals of the possible errors, to the evaluation of finite continued fractions, the computation of determinants of tridiagonal systems, the computation of determinants of second order, and a "fast" complex multiplication. We present a rounding error analysis of product and summation algorithms, as in Horner's scheme. The error estimates are tested by numerical examples. The program used for the calculations is MATLAB 7 (Mathworks.com).

Key words: Horner's scheme, tridiagonal systems, linearization method

Primarily, Horner's method can be used to convert between different positional number systems, in which case x is the base of the number system and the coefficients a_j are the digits of the base-x representation of a given number. It can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. In fact, when x is a matrix, further acceleration is possible which exploits the structure of matrix multiplication: only about \sqrt{n} instead of n matrix multiplications are needed (at the expense of requiring more storage), using the 1973 method of Paterson and Stockmeyer [1]. For example, to find the product of a number m with 0.15625, one may use the binary expansion

0.15625\, m = (0.00101_b)\, m = (2^{-3} + 2^{-5})\, m = (2^{-3})\, m + (2^{-5})\, m = 2^{-3}\,\bigl(m + (2^{-2})\, m\bigr).
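The factoring above is exactly a Horner-type evaluation of the digit polynomial. The following Python sketch illustrates it (the paper's own computations were carried out in MATLAB 7; this fragment and the helper name horner are only an illustration added here):

```python
def horner(coeffs, x):
    """Evaluate a_n*x**n + ... + a_1*x + a_0 with n multiplications and n additions.
    coeffs are given from the highest degree down: [a_n, ..., a_1, a_0]."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# Polynomial evaluation: 2x^3 - 3x^2 + 5 at x = 4.
print(horner([2, -3, 0, 5], 4))          # 85.0

# Base conversion: the digits of a base-x representation are just the coefficients.
print(horner([1, 1, 0, 1], 2))           # 13.0, the value of 1101 in base 2

# Horner-type factoring of a product by the binary fraction 0.00101b = 2**-3 + 2**-5:
# (0.15625)*m = 2**-3 * (m + 2**-2 * m), i.e. shifts and one addition only.
m = 7.0
print(0.15625 * m, 2.0**-3 * (m + 2.0**-2 * m))   # both give 1.09375
```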
Introduction
In a series of textbooks and papers the linearization method is explained and applied to simple examples; in particular, the associated error estimates have been missing. These questions have now been thoroughly studied and answered in our perturbation theory for evaluation algorithms of arithmetic expressions [2]. The theoretical results are applied in this paper to a series of elementary but important algorithms and, at the same time, are tested numerically. The results show that our new condition numbers yield much more detailed information on possible errors than Wilkinson's [3] backward analysis. We analyze the computation of sequences of partial products and sums. The computation of partial products is always a well-conditioned algorithm; the summation algorithm belongs to the class of elementary one-step algorithms introduced here. In particular, the associated absolute and relative a priori and a posteriori condition numbers are established. The error analysis of Horner's scheme shows that our a posteriori condition numbers coincide, in essence, with error bounds described by Adams [4] for the Taylor and the Newton forms of a polynomial. Another important member of the class of elementary one-step algorithms is the Horner-like algorithm for evaluating finite continued fractions. A first numerical example deals with rounding errors in the evaluation of the continued fraction for tan(z)/z.

Next, the recursive computation of determinants of tridiagonal matrices is analyzed, and the error estimate is tested numerically, with success, on a matrix of 100 rows. In comparison with the stability constants of Babuska [5] for solving tridiagonal linear systems, our condition numbers constitute explicit measures for the accumulated errors. Finally, we treat the evaluation of determinants of second order; as an application, the numerical stability of the common complex multiplication and of a "fast" complex multiplication are analyzed and compared. A detailed rounding error analysis of the numerical solution of two linear equations in two unknowns has already been established [6]. An error analysis of Gaussian elimination for general linear systems is in preparation [7]. A rounding error analysis of difference and extrapolation schemes can also be obtained by our method.

The study of a numerical algorithm proceeds in the following way. First the sequence f = (f_0, ..., f_n) of input and arithmetic operations is determined, specifying the algorithm. Then the graph of the functional dependences, the paths of error propagation, and the associated weights are described. The system of linear error equations can easily be read from the graph of the algorithm. Its solutions yield approximations of the absolute and relative a priori and a posteriori errors, respectively. It turns out that in this case the condition numbers can easily be determined by recursion formulae.

Let u = (u_0, ..., u_n) and v = (v_0, ..., v_n) denote the sequences of input data, intermediate and final results, and of their numerical approximations, respectively. By Δu_t = v_t − u_t are meant the absolute, and by ρu_t = Δu_t/u_t the relative, a priori errors of the approximations v_t of u_t. The fundamental a priori error estimates then read

|\Delta u_t| \le \sigma_t \eta + O(\eta^2), \qquad |\rho u_t| \le \rho_t \eta + O(\eta^2), \qquad t = 0, \dots, n,   ...(1)

where σ_t, ρ_t are the absolute and relative a priori condition numbers and η is the floating point accuracy constant. The numerical examples in this paper are computed in decimal floating point arithmetic. The symmetric rounding function rd_N is performed by rounding the 10-digit results to N decimal places; N-digit decimal floating point arithmetic is realized by rounding to N places after each operation. The constant η in the numerical examples is thus specified by

\eta = 5 \times 10^{-N}.   ...(2)

In general, η ≪ 1, so that the remainder terms O(η²) in (1) can be neglected against the first order terms. In applying the error estimates (1) to numerical examples, it should be noticed that the condition numbers yield optimal bounds of the possible errors. The actual errors, as a rule, are significantly smaller than indicated by σ_t η, ρ_t η. By variation of the parameters, however, we have in most cases found examples where the actual maximal errors are overestimated at most by a factor of 5 to 10, so that the magnitude of the error is described correctly. The sign ≐ indicates that the numerical result has been computed in higher precision than the given digits show. A priori condition numbers and a priori errors have always been computed with the highest precision of 10 decimal places.
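For reference, one way to realize the rounding function rd_N and the N-digit decimal arithmetic described above is sketched below in Python (the paper's computations used MATLAB 7; the function names rd and fl here are illustrative assumptions, not the paper's code):

```python
from decimal import Decimal, ROUND_HALF_UP
from math import floor, log10
import operator

def rd(x, n):
    """Symmetric rounding of x to n significant decimal digits (the rd_N of the text)."""
    if x == 0.0:
        return 0.0
    lead = floor(log10(abs(x)))                 # decimal exponent of the leading digit
    quantum = Decimal(1).scaleb(lead - n + 1)   # spacing of n-digit numbers near x
    return float(Decimal(repr(x)).quantize(quantum, rounding=ROUND_HALF_UP))

def fl(op, a, b, n):
    """N-digit decimal floating point: perform op exactly, then round to n digits."""
    return rd(op(a, b), n)

N = 5
eta = 5 * 10.0 ** (-N)                          # accuracy constant (2)
print(rd(1.010050167, N))                       # 1.0101
print(fl(operator.add, 16.07, -9.315, 4))       # 6.755 in 4-digit arithmetic
```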
Computation of Products
The methods of our error analysis differ essentially, in the number of steps and in the underlying idea, from Wilkinson's rounding error analysis (see Wilkinson [3]). Therefore, the usual product algorithm is discussed thoroughly in this paper. In particular, the condition numbers for perturbations by rounding errors in the arithmetic operations and for data perturbations will be determined, and the general error estimates (1) will be tested by selected numerical examples.

2.1 Sequences of Partial Products
A finite or infinite product is computed by means of the sequence of partial products

u_0 = b_0, \qquad u_t = u_{t-1} b_t, \qquad t = 1, 2, \dots, n,   ...(3)

such that

u_t = \prod_{j=0}^{t} b_j.   ...(4)

Introducing the intermediate steps u_{t-1/2} = b_t, this algorithm is defined, analogously, by the sequence of functions

F_0(x) = b_0, \qquad F_{t-1/2}(x) = b_t, \qquad F_t(x) = x_{t-1}\, x_{t-1/2}, \qquad t = 1, \dots, n.   ...(5)

The associated linear relative a priori error equations read

r_0 = e_0^b, \qquad r_{t-1/2} = e_t^b, \qquad r_t = r_{t-1} + r_{t-1/2} + e_t^{\times},

or

r_0 = e_0^b, \qquad r_t = r_{t-1} + e_t^b + e_t^{\times}, \qquad t = 1, \dots, n.   ...(6)

To simplify the notation, here and in the following, e_t^b, e_t^c denote the relative errors ρb_t, ρc_t of the data, and e_t^+, e_t^×, e_t^/ denote the relative rounding errors of the floating point operations +, ×, /. The graph of this algorithm is a tree, and all non-zero weights of the relative error equations are equal to one (see Fig. 1). Obviously, the linear system (6) has the solution

r_t = e_0^b + \sum_{j=1}^{t} (e_j^b + e_j^{\times}), \qquad t = 1, \dots, n.   ...(7)

From this representation one immediately derives the associated relative a priori condition numbers ρ_t^D for data perturbations only (e_j^× = 0, |e_j^b| ≤ η) and ρ_t^R for perturbations by rounding errors in the arithmetic operations only (e_j^b = 0, |e_j^×| ≤ η):

\rho_t^D = t + 1, \qquad \rho_t^R = t, \qquad \rho_t = \rho_t^D + \rho_t^R = 2t + 1.   ...(8)

Consequently,

\rho_t^R = \rho_t^D - 1, \qquad t = 1, \dots, n,   ...(9)

so that the computation of partial products is a well-conditioned algorithm.

Numerical Examples
Example 3.1

U_t = (\exp .01)^t, \qquad t = 1, 2, \dots   ...(10)

This sequence of powers is computed by the above algorithm with b_0 = 1 and b_t = b = exp .01. We compute the sequence numerically in 5-digit decimal floating point. The base b, rounded to 5 digits, becomes b' = 1.0101, having the relative error e_t^b = e^b ≐ 4.93 × 10^{-5}. In this case, (7) yields the representation

r_t = 4.93 \times 10^{-5}\, t + \sum_{j=1}^{t} e_j^{\times}.   ...(11)

Table 1.1.1 contains the relative errors ρu_t divided by t for t = 10, 20, ..., 300. It turns out that ρu_t increases, in essence, linearly with t as t e^b; the rounding errors e_t^× seem not to participate in the growth of the error. Indeed, Table 1.1.1 shows that the mean values of the e_j^× cancel, since they can be estimated by

\left| \frac{1}{t} \sum_{j=1}^{t} e_j^{\times} \right| = \left| \frac{\rho u_t}{t} - 4.93 \times 10^{-5} \right| \le 5 \times 10^{-7}   ...(12)

for t = 120, ..., 300. Thus the mean values of the sequence (e_j^×) are smaller by a factor of 10^{-2} than the floating point accuracy constant η = 5 × 10^{-5}. That is, randomly distributed rounding errors e_t^× cancel to a considerable extent.
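Example 3.1 is easy to replay. The sketch below (an illustration added here, not the paper's MATLAB program; the name rd5 is ours and its tie-breaking is round-half-even rather than the symmetric rounding of the text) runs the product recursion (3) in simulated 5-digit decimal arithmetic and prints ρu_t/t, which should come out close to the value 4.9 × 10^{-5} listed in Table 1.1.1:

```python
import math

def rd5(x):
    """Round to 5 significant decimal digits (stand-in for rd_N with N = 5)."""
    return float(f"{x:.4e}")

b_exact = math.exp(0.01)        # 1.010050167...
b = rd5(b_exact)                # 1.0101, relative error e^b of about 4.9e-5

u = 1.0                         # reference partial products in double precision
v = 1.0                         # partial products of algorithm (3) with rounding
for t in range(1, 301):
    u *= b_exact
    v = rd5(v * b)
    if t % 50 == 0:
        rel_err = (v - u) / u
        print(t, f"{rel_err / t:.2e}")   # compare with Table 1.1.1: about 4.9e-05
```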
Example 3.2

\frac{\sin \pi x}{\pi x} = \prod_{k=1}^{\infty} \left( 1 - \frac{x^2}{k^2} \right)   ...(13)

(see Abramowitz and Stegun [8, p. 75]). The associated sequence of partial products is computed for x = 13/6 ≐ 2.17, sin(πx)/(πx) = 3/(13π) ≐ 7.35 × 10^{-2}, in the form

v_0 = 1, \qquad v_t = rd_N\!\left( v_{t-1}\, rd_N(b_t) \right), \qquad b_t = 1 - \frac{x^2}{t^2}, \qquad t = 1, 2, \dots,   ...(14)

in 6-digit decimal floating point, so that η = 5 × 10^{-6}. The exact partial products u_t are computed approximately in 10-digit decimal floating point. Table 1.1.2 lists, blockwise, the index t, the numerical approximation v_t, the associated relative error ρu_t = (v_t − u_t)/u_t, and the error sums

\sum_{j=1}^{t} e_j^b, \qquad \sum_{j=1}^{t} |e_j^b|, \qquad \sum_{j=1}^{t} e_j^{\times}, \qquad \sum_{j=1}^{t} |e_j^{\times}|.   ...(15)

The numerical results show that the absolute error sums of the sequences (e_j^b), (e_j^×) increase, in essence, linearly with t. In the error sums themselves, however, the errors e_j^b, e_j^× again cancel to a large extent. Accordingly, the relative errors ρu_t do not grow systematically but remain bounded in modulus by about 8η = 4 × 10^{-5}. In addition, it is easily verified from the listed results that the linear error approximations r_t in (7) approximate the relative errors ρu_t very accurately.

2.2 The Summation Algorithm
The algorithm

u_0 = c_0, \qquad u_t = u_{t-1} + c_t, \qquad t = 1, \dots, n,   ...(1)

yields the sequence of partial sums

u_t = \sum_{j=0}^{t} c_j.   ...(2)

Specifying the input of the coefficients by the intermediate steps u_{t-1/2} = c_t, the algorithm is defined by the sequence of functions

F_0(x) = c_0, \qquad F_{t-1/2}(x) = c_t, \qquad F_t(x) = x_{t-1} + x_{t-1/2}, \qquad t = 1, \dots, n.   ...(3)

The linear absolute a priori error equations read

s_0 = e_0^c, \qquad s_t = s_{t-1} + e_t^c + u_t e_t^{+}, \qquad t = 1, \dots, n.   ...(4)

The associated graph (see Fig. 2) is a tree, and all nonvanishing weights of the absolute error equations are equal to one. The absolute a priori condition numbers σ_t are determined, according to [2], with the weights α_{t-1/2} = γ_t^c |c_t| and α_t = γ_t^+ |u_t|, recursively by

\sigma_0 = \gamma_0^c |c_0|, \qquad \sigma_t = \sigma_{t-1} + \gamma_t^c |c_t| + \gamma_t^{+} |u_t|, \qquad t = 1, \dots, n.   ...(5)

These condition numbers have the explicit representation

\sigma_t = \gamma_0^c |c_0| + \sum_{j=1}^{t} \left( \gamma_j^c |c_j| + \gamma_j^{+} |u_j| \right).   ...(6)

When the errors in the coefficients c_j have the same order of magnitude as the rounding errors in the arithmetic operations, the partial condition numbers σ_t^D, σ_t^R of the summation algorithm under data perturbations only, assuming exact arithmetic operations (γ_j^c = 1, γ_j^+ = 0), and under rounding errors in the arithmetic operations only, assuming exact data (γ_j^c = 0, γ_j^+ = 1), become

\sigma_t^D = \sum_{j=0}^{t} |c_j|, \qquad \sigma_t^R = \sum_{j=1}^{t} |u_j|.   ...(7)

The associated relative a priori condition numbers are

\rho_t^D = \frac{\sum_{j=0}^{t} |c_j|}{\bigl| \sum_{j=0}^{t} c_j \bigr|}, \qquad \rho_t^R = \frac{\sum_{j=1}^{t} |u_j|}{\bigl| \sum_{j=0}^{t} c_j \bigr|},   ...(8)

such that

\rho_t = \frac{\sigma_t}{|u_t|} = \rho_t^D + \rho_t^R.   ...(9)

As an example, consider the partial sums of the exponential series

\exp x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \dots = \sum_{j=0}^{\infty} c_j .

Then the sum of the series is exp x, and for sufficiently large n one has

\sigma_n^D ≐ \exp |x|, \qquad \rho_n^D = \frac{\exp |x|}{\exp x} = \begin{cases} 1, & x \ge 0, \\ \exp 2|x|, & x < 0. \end{cases}   ...(10)

Hence for negative x the condition numbers increase exponentially with increasing |x|. Forsythe [14] discusses the example of computing exp(−5.5) = 4.08 × 10^{-3}, where the terms c_j of the series are computed exactly and rounded in floating point to five decimal places, so that |ρc_j| ≤ η = 5 × 10^{-5}. The summation is extended over so many terms that the first five digits of the partial sums remain unchanged, that is, rd_5(v_n) = rd_5(v_{n+1}); thus n = 25, v_n = 2.64 × 10^{-3}, and the associated condition number is σ_n^D = exp 5.5 ≐ 2.45 × 10^{2}. From the error estimate (1) in the introduction we obtain |Δu_n| ≤ σ_n^D η ≐ 1.22 × 10^{-2}, whereas the actual absolute error is |Δu_n| ≐ 1.45 × 10^{-3}. Analogous conditions are met in computing partial sums of the series for sin x, cos x, ln x, etc., and of similar alternating series.
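Forsythe's exp(−5.5) example can be reproduced along the following lines (an illustrative sketch only: the terms c_j are generated here recursively in rounded 5-digit arithmetic, so the computed value will be close to, but not exactly, the quoted v_n = 2.64 × 10^{-3}; the helper name rd5 is ours):

```python
import math

def rd5(x):
    """Round to 5 significant decimal digits (ties half-even)."""
    return float(f"{x:.4e}")

x = -5.5
s = term = rd5(1.0)                  # c_0 = 1
j = 0
while True:
    j += 1
    term = rd5(term * x / j)         # c_j = x**j / j!, formed in rounded arithmetic
    new_s = rd5(s + term)            # summation algorithm (1) with rounding
    if new_s == s:                   # the first five digits no longer change
        break
    s = new_s

exact = math.exp(x)                  # 4.0868e-3
print(j, s, abs(s - exact))          # absolute error of order 1e-3, to be compared
                                     # with the a priori bound sigma_n^D*eta = 1.22e-2
```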
Example 3.3
In the same way as above, partial sums of the Bessel function

J_0(x) = \sum_{j=0}^{n} (-1)^j \frac{(x^2/4)^j}{(j!)^2}   ...(11)

are computed and, simultaneously, the absolute a priori condition numbers

\sigma_n^D = \sum_{j=0}^{n} \frac{(x^2/4)^j}{(j!)^2} = I_0(x)   ...(12)

are evaluated. Table 1.2.1 shows some values of the computed approximations J_0'(x) and the absolute errors ΔJ_0(x) = J_0'(x) − J_0(x) in the neighborhood of the fourth zero 11.79... of J_0. The function I_0 increases from 1.54 × 10^{4} to 1.6 × 10^{4} in this interval. Hence 1.6 × 10^{4} × 5 × 10^{-5} = 0.8 is the approximate a priori bound for the absolute errors ΔJ_0(x). This bound is about six times the largest error, attained at x = 11.82.

When the terms c_j of the sum (2) are positive we have

\rho_t^D = 1, \qquad \rho_t^R \le t,   ...(13)

and thus

\rho_t^R \le t\, \rho_t^D, \qquad t = 1, \dots, n.   ...(14)

Hence the computation of sums with positive terms is always quasi-well-conditioned. In contrast to the condition numbers ρ_t^D, the condition numbers ρ_t^R are not independent of the ordering of the terms c_0, ..., c_t. In computing the sums (2) in the floating point arithmetic of a computer, the ordering of the terms should be so chosen that the condition numbers σ_n^R, ρ_n^R, and thus the bounds in the error estimates (1), become as small as possible. For a sum with positive terms this is achieved if the terms c_j constitute an increasing sequence. Note that the ordering by increasing absolute values is commonly recommended without any restrictions. However, if both positive and negative terms c_j occur in the sum, this recommendation is, in general, no longer valid, as the following example shows. In that case the terms have to be ordered so that the sum of the absolute values of the partial sums is smallest.

Example 3.4

S = 1.025 \times 10^{3} - 0.9123 \times 10^{3} - 0.9663 \times 10^{2} - 0.9315 \times 10^{1},   ...(15)

where c_0 = 1.025 × 10^{3}, ..., c_3 = −0.9315 × 10^{1}; see Wilkinson [3, I.25]. Adding c_0 = 1025, ..., c_3 = −9.315 in this order, one obtains the partial sums and, by (7), (9), the condition numbers

u_1 = 112.7, \quad u_2 = 16.07, \quad u_3 = 6.755, \quad \sigma_3^R = 135.5, \quad \rho_3^R ≐ 20.06.   ...(16)

In 4-digit decimal floating point the summation is performed without rounding errors and gives the result v_3 = u_3 = 6.755. In the converse order c_0 = −9.315, ..., c_3 = 1025, the terms are arranged with respect to increasing absolute values. Then

u_1 = -105.945, \quad u_2 = -1018.245, \quad u_3 = 6.755, \quad \sigma_3^R = 1130, \quad \rho_3^R ≐ 167.4.   ...(17)

These condition numbers are more than eight times larger than the condition numbers (16). In 4-digit decimal floating point now the approximate sum v_3 = 7.000 is computed, having the relative error ρu_3 ≐ 3.6 × 10^{-2}, while the approximate error bound is ρ_3^R η ≐ 8.4 × 10^{-2}.
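The effect of the ordering in Example 3.4 can be checked directly in simulated 4-digit decimal arithmetic (η = 5 × 10^{-4}); the following sketch is an added illustration (the names rd4 and fp_sum are ours), not the paper's program:

```python
def rd4(x):
    """Round to 4 significant decimal digits (ties half-even)."""
    return float(f"{x:.3e}")

def fp_sum(terms, rnd):
    """Summation algorithm (1): u_t = rd(u_{t-1} + c_t)."""
    s = rnd(terms[0])
    for c in terms[1:]:
        s = rnd(s + c)
    return s

terms = [1025.0, -912.3, -96.63, -9.315]      # exact sum 6.755

print(fp_sum(terms, rd4))                     # 6.755, decreasing |c_j|: no rounding error
print(fp_sum(terms[::-1], rd4))               # 7.0,   increasing |c_j|: relative error 3.6e-2
```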
Example 3.5

u_n = \sum_{k=0}^{n} \frac{1}{(k+1)^2}.   ...(18)

We compute this partial sum of the infinite series for π²/6 ≐ 1.645 with n = 1023 in 6-digit decimal floating point. The exact partial sum, rounded to six places, is u_n = 1.64396. The associated absolute a priori condition numbers σ_0 = γ_0^c |c_0|, σ_t = σ_{t−1} + γ_t^c |c_t| + γ_t^+ |u_t|, t = 1, ..., n, are computed recursively from the equations

\sigma_0 = |c_0|, \qquad \sigma_t = \sigma_{t-1} + |c_t| + |u_t|, \qquad t = 1, \dots, n.   ...(19)

The relative a priori condition number is finally determined by ρ_n = σ_n/|u_n|. The numerical summation in natural ordering yields partial sums which are constant for t ≥ m = 446:

v_m = 1.64308, \qquad \rho_m ≐ 445, \qquad \rho u_m ≐ 2.29 \times 10^{-4}.   ...(20)

The error bound ρ_m η ≐ 2.23 × 10^{-3} overestimates the error ρu_m by a factor of about 10. In the converse order of increasing terms the algorithm computes

v_n = 1.644, \qquad \rho_n ≐ 5.57.   ...(21)

This value coincides with the above rounded value of u_n in all decimal places shown. We observe considerable differences in the magnitude of the condition numbers (20) and (21). In this context, let us refer already to the result for cascade summation. There we shall compute

v_n = 1.643, \qquad \rho_n ≐ 11.   ...(22)

This approximation differs by one unit in the last decimal place from the rounded u_n. The condition number in (22) lies between the two condition numbers above, so that the second summation procedure (increasing terms) is best for this series.
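To illustrate the three orderings quoted in (20)-(22), the sketch below sums the series of Example 3.5 in simulated 6-digit decimal arithmetic in natural order, in increasing order, and by a recursive pairwise scheme, which is what we take "cascade summation" to mean here (the paper only quotes the cascade result, so this interpretation, like the helper names, is our assumption):

```python
def rd6(x):
    """Round to 6 significant decimal digits (ties half-even)."""
    return float(f"{x:.5e}")

def fp_sum(terms):
    """Summation algorithm (1) with rounding after each addition."""
    s = rd6(terms[0])
    for c in terms[1:]:
        s = rd6(s + c)
    return s

def cascade_sum(terms):
    """Pairwise (cascade) summation with rounding after every addition."""
    if len(terms) == 1:
        return rd6(terms[0])
    mid = len(terms) // 2
    return rd6(cascade_sum(terms[:mid]) + cascade_sum(terms[mid:]))

terms = [1.0 / (k + 1) ** 2 for k in range(1024)]   # c_k = 1/(k+1)**2, n = 1023

print(fp_sum(terms))          # natural order: stagnates near 1.643, compare (20)
print(fp_sum(terms[::-1]))    # increasing terms: compare (21)
print(cascade_sum(terms))     # cascade summation: compare (22)
print(sum(terms))             # double-precision reference, 1.6439579...
```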
Conclusion
The perturbations of function values p(z) due to perturbations of the argument z have been described already. Hence we limit the further study to the case of unperturbed arguments, realized in practice by the input of machine numbers as arguments z. The bound for the absolute a posteriori error |Δv_i|/η has been stated by Tsao [13, (5.5)], who also proposes a recursive computation of the bound. A similar result is found in Adams [4], where it is attributed to Kahan. The theoretical results of the paper have been applied to a series of concrete algorithms and have proved to be a very effective means of both a priori and a posteriori error analysis.

References
1. Higham, N. J., (2002), "Accuracy and Stability of Numerical Algorithms", 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia.
2. Stummel, F., (1981), "Perturbation Theory for Evaluation Algorithms of Arithmetic Expressions", Math. Comput., 37, P. 435-473.
3. Wilkinson, J. H., (1963), "Rounding Errors in Algebraic Processes", Prentice Hall, Englewood Cliffs.
4. Adams, D. A., (1967), "A Stopping Criterion for Polynomial Root Finding", Comm. ACM, 10, P. 655-658.
5. Babuska, I., (1969), "Numerical Stability in Numerical Analysis", Proc. IFIP Congress 1968, North Holland, Amsterdam, P. 11-23.
6. Stummel, F., (1982), "Rounding Errors in Numerical Solutions of Two Linear Equations in Two Unknowns", Math. Methods Appl. Sci., 4, P. 549-571.
7. Stummel, F., (1985), "Forward Error Analysis of Gaussian Elimination, Part I: Error and Residual Estimates", Numer. Math., 46, P. 365-395.
8. Abramowitz, M. and Stegun, I. A., (1968), "Handbook of Mathematical Functions", Dover, New York.
9. Wilkinson, J. H., (1968), "A Priori Error Analysis of Algebraic Processes", Proc. Int. Congress Math., Moscow, P. 629-640.
10. Marcozzi, M., Choi, S. and Chen, C. S., (2001), "On the Use of Boundary Conditions for Variational Formulations Arising in Financial Mathematics", Appl. Math. Comput., 124, P. 197-214.
11. Kaya, D. and El-Sayed, S. M., (2004), "A Numerical Solution of the Klein-Gordon Equation and Convergence of the Decomposition Method", Appl. Math. Comput., 156, P. 341-353.
12. Dehghan, M., (2006), "A Computational Study of the One-Dimensional Parabolic Equation Subject to Nonclassical Boundary Specifications", Numer. Methods Partial Differential Equations, 22, P. 220-257.
13. Tsao, N. K., (1974), "Some A Posteriori Error Bounds in Floating Point Computations", J. ACM, 21, P. 6-17.
14. Forsythe, G. E. and Moler, C. B., (1967), "Computer Solution of Linear Algebraic Systems", Prentice Hall, Englewood Cliffs.
15. Fox, L., (1964), "Introduction to Numerical Linear Algebra", Clarendon Press, Oxford.
16. Schonauer, W., (1987), "Scientific Computing on Vector Computers", North Holland, Amsterdam.
17. Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C., (2009), "Introduction to Algorithms", 3rd ed., MIT Press, P. 41, 900, 990.

Fig. 1: Graph of the linear relative a priori error equations for sequences of partial products (nodes b_0, b_1, ..., b_t and u_1, ..., u_t, connected by multiplication operations).

Fig. 2: Graph of the linear absolute error equations for the summation algorithm (nodes c_0, c_1, ..., c_t and u_1, ..., u_t, connected by addition operations).

Table 1.1.1: Relative errors in the computation of (exp .01)^t; listed are t and ρu_t/t × 10^5.

   t   ρu_t/t ×10^5      t   ρu_t/t ×10^5      t   ρu_t/t ×10^5
  10       3.9         110       4.9         210       4.9
  20       4.5         120       4.9         220       4.9
  30       4.6         130       4.9         230       4.9
  40       4.7         140       4.9         240       4.9
  50       4.8         150       4.9         250       4.9
  60       4.8         160       4.9         260       4.9
  70       4.8         170       4.9         270       4.9
  80       4.8         180       4.9         280       4.9
  90       4.8         190       4.9         290       4.9
 100       4.9         200       4.9         300       4.9

Table 1.1.2: Computation of the partial products for sin(πx)/(πx), x ≐ 2.17; listed blockwise are v_t, ρu_t, and the error sums of (15).

  t            100          200          300          400          500
  v_t       7.69×10^-2   7.51×10^-2   7.46×10^-2   7.43×10^-2   7.41×10^-2
  ρu_t     -6.21×10^-6  -1.10×10^-5  -1.46×10^-5  -2.30×10^-5  -3.17×10^-5
  Σ e_j^b  -3.48×10^-6  -1.61×10^-6  -4.07×10^-6  -3.43×10^-6  -2.48×10^-6
  Σ|e_j^b|  2.38×10^-5   5.05×10^-5   7.71×10^-5   1.02×10^-4   1.27×10^-4
  Σ e_j^×  -2.72×10^-6  -9.43×10^-6  -1.05×10^-5  -1.96×10^-5  -2.93×10^-5
  Σ|e_j^×|  5.37×10^-5   8.59×10^-5   1.20×10^-4   1.53×10^-4   1.86×10^-4

  t            600          700          800          900         1000
  v_t       7.40×10^-2   7.39×10^-2   7.39×10^-2   7.38×10^-2   7.38×10^-2
  ρu_t     -2.73×10^-5  -1.69×10^-5  -3.74×10^-5   5.87×10^-6  -1.49×10^-5
  Σ e_j^b  -5.35×10^-7  -2.54×10^-6  -1.36×10^-6   8.69×10^-7  -1.08×10^-6
  Σ|e_j^b|  1.52×10^-4   1.77×10^-4   2.04×10^-4   2.25×10^-4   2.49×10^-4
  Σ e_j^×  -2.69×10^-5  -1.44×10^-5  -3.61×10^-5   5.00×10^-6  -1.39×10^-5
  Σ|e_j^×|  2.23×10^-4   2.56×10^-4   2.84×10^-4   3.25×10^-4   3.71×10^-4

Table 1.2.1: Numerical computation of J_0(x); exact values J_0(x), computed approximations J_0'(x), and absolute errors ΔJ_0(x) = J_0'(x) − J_0(x).

    x        J_0(x)        J_0'(x)       ΔJ_0(x)
  11.78   -2.68×10^-3   -5.21×10^-2   -4.95×10^-2
  11.79   -3.56×10^-4    7.52×10^-2    7.52×10^-2
  11.80    1.97×10^-3   -9.17×10^-2   -9.37×10^-2
  11.81    4.29×10^-3   -2.47×10^-2   -2.89×10^-2
  11.82    6.61×10^-3    1.38×10^-1    1.32×10^-1
  11.83    8.93×10^-3    3.72×10^-2    2.83×10^-2