Ratio Mathematica, Issue n. 30 (2016), pp. 35-43, ISSN (print): 1592-7415, (online): 2282-8214

A Recursive Variant of Schwarz Type Domain Decomposition Methods

František Bubeník, Petr Mayer
Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
Frantisek.Bubenik@cvut.cz, Petr.Mayer@cvut.cz

Abstract

In this paper a slightly different approach to the use of domain decomposition methods of the Schwarz type is proposed. Instead of the standard coarse space construction we propose to use a recursive solution on each domain. Thus we do not need to construct a coarse space, but we nevertheless keep an $O(1)$ convergence speed. For the local problems we use standard iterative solvers for which the amount of work for one step is $O(N)$, where $N$ is the number of equations. Since the overlapping is under our control, we can keep the total work in $O(N^{1+\gamma})$ operations with an arbitrary positive $\gamma$.

Keywords: Domain Decomposition, Finite Element Method, Linear Systems.

2000 AMS subject classifications: 97U99

doi: 10.23755/rm.v30i1.4

1 Introduction

This paper deals with some aspects of the classic Schwarz alternating method. We analyze how to cope with the slowdown of the algorithms when the stepsize of the mesh used for the finite element method decreases. The standard two-level method is described, and alternative approaches that solve the local problems by the same method are proposed.

2 The Schwarz alternating method

As a model problem we solve the Poisson problem
$$-\Delta u(x) = f(x), \qquad x \in \Omega \subseteq \mathbb{R}^d.$$
We use the finite element method to solve the problem. It leads to a linear system
$$Ax = b. \qquad (1)$$
Consider functions $\varphi_1, \dots, \varphi_n$ as a basis and denote by $V_n = \mathrm{span}(\varphi_1, \dots, \varphi_n)$ the linear hull of these functions, that is, the set of all their linear combinations. Recall that $A_{ij} = a(\varphi_i, \varphi_j)$ with the bilinear form
$$a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega.$$
The matrix $A$ for our problem is symmetric and positive definite. Moreover, for many interesting choices of the basis of $V_n$, the matrix $A$ is sparse, but large.

One possible strategy for solving the system (1) is a sparse version of the LU-decomposition. In most cases, however, the fill-in that takes place during Gaussian elimination makes such an approach unusable. The usual choice is then an iterative method. Since our matrix is symmetric positive definite, the conjugate gradient method seems advantageous. This choice is still problematic, because the amount of work grows rapidly with the size of the problem. It is therefore more convenient to use the preconditioned conjugate gradient method with an appropriate preconditioner.

As a preconditioner we can choose a slightly modified Schwarz alternating method, see [2]; the original description is in [1]. It can be viewed as a kind of block symmetrized Gauss-Seidel method.

2.1 Formulation of the algorithm

Let us denote by $n_d$ the number of domains. For each $i \in \{1, 2, \dots, n_d\}$ we define an index set $I_i = \{ i^{(i)}_1, i^{(i)}_2, \dots, i^{(i)}_{n_i} \} \subseteq \{1, \dots, n\}$. These index sets realize a covering of the set of all indices, i.e.
$$\{1, \dots, n\} = \bigcup_{i=1}^{n_d} I_i.$$
This covering is not required to be disjoint. We define subspaces of $V_n$ so that the subspace $V_n^{(i)}$ is the linear hull of the corresponding part of the basis of $V_n$, that is,
$$V_n^{(i)} = \mathrm{span}\{\varphi_j : j \in I_i\}.$$
Finally we define the subdomains
$$\Omega^{(i)} = \bigcup_{j \in I_i} \operatorname{supp}(\varphi_j), \qquad (2)$$
where $\operatorname{supp}(f)$ denotes the support of a function $f$. The sizes of the individual domains are $n_1, \dots, n_{n_d}$. In Matlab-like notation we can write $A^{(i)} = A(I_i, I_i)$. The matrix interpretation of the local problems uses the matrices
$$N^{(i)} = \bigl(n^{(i)}_{k,l}\bigr) \in \mathbb{R}^{n \times n_i}, \qquad n^{(i)}_{k,l} = \begin{cases} 1 & \text{if } k = i^{(i)}_l, \\ 0 & \text{otherwise.} \end{cases}$$
Then
$$A^{(i)} = N^{(i)T} A N^{(i)}. \qquad (3)$$
The following algorithm describes the transition from $x^{(k)}$ to $x^{(k+1)}$.

Algorithm 2.1 (One step of the symmetrized Schwarz method).
$x^{(k + \frac{0}{2n_d})} := x^{(k)}$
for $i = 1, \dots, n_d$   (4)
    $r := b - A x^{(k + \frac{i-1}{2n_d})}$
    $\tilde r := N^{(i)T} r$
    $A^{(i)} := N^{(i)T} A N^{(i)}$
    (♣) solve $A^{(i)} c = \tilde r$, i.e. $c = A^{(i)-1} \tilde r$
    $x^{(k + \frac{i}{2n_d})} := x^{(k + \frac{i-1}{2n_d})} + N^{(i)} c$
end for
for $i = 1, \dots, n_d$   (5)
    $r := b - A x^{(k + \frac{1}{2} + \frac{i-1}{2n_d})}$
    $\tilde r := N^{(n_d+1-i)T} r$
    $A^{(n_d+1-i)} := N^{(n_d+1-i)T} A N^{(n_d+1-i)}$
    (♣) solve $A^{(n_d+1-i)} c = \tilde r$
    $x^{(k + \frac{1}{2} + \frac{i}{2n_d})} := x^{(k + \frac{1}{2} + \frac{i-1}{2n_d})} + N^{(n_d+1-i)} c$
end for
end algorithm

The method used in Algorithm 2.1 can also be viewed as a variant of the block Gauss-Seidel method, with the difference that the individual blocks may overlap.
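As an illustration, the following sketch shows how one step of Algorithm 2.1 can be realized with dense NumPy arrays. It is only a minimal model of the algorithm, not the authors' implementation: the function name schwarz_step and the representation of the index sets $I_i$ as a Python list idx of integer arrays are our assumptions, the restriction $N^{(i)T} A N^{(i)}$ is obtained by simply selecting rows and columns, and the step marked (♣) is carried out by a direct solver.

```python
import numpy as np

def schwarz_step(A, b, x, idx):
    """One step of the symmetrized Schwarz method (Algorithm 2.1), sketch.

    A   : (n, n) symmetric positive definite matrix
    b   : (n,)   right-hand side
    x   : (n,)   current iterate x^(k)
    idx : list of integer index arrays; idx[i] plays the role of I_i
    """
    x = x.copy()
    nd = len(idx)
    # forward sweep over the domains, loop (4)
    for i in range(nd):
        Ii = idx[i]
        r = b - A @ x                      # global residual
        r_loc = r[Ii]                      # tilde r = N^(i)T r
        A_loc = A[np.ix_(Ii, Ii)]          # A^(i) = N^(i)T A N^(i)
        c = np.linalg.solve(A_loc, r_loc)  # exact local solve, step (clubsuit)
        x[Ii] += c                         # x := x + N^(i) c
    # backward sweep over the domains in reverse order, loop (5)
    for i in reversed(range(nd)):
        Ii = idx[i]
        r = b - A @ x
        r_loc = r[Ii]
        A_loc = A[np.ix_(Ii, Ii)]
        c = np.linalg.solve(A_loc, r_loc)
        x[Ii] += c
    return x
```

For a small test matrix, for instance a one-dimensional finite element Laplacian with two overlapping index sets such as idx = [np.arange(0, 60), np.arange(40, 100)], repeated calls x = schwarz_step(A, b, x, idx) decrease the error in the energy norm monotonically.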
Put
$$P^{(i)} = A^{1/2} N^{(i)} (N^{(i)T} A N^{(i)})^{-1} N^{(i)T} A^{1/2}. \qquad (6)$$
Then
$$P^{(i)2} = A^{1/2} N^{(i)} (N^{(i)T} A N^{(i)})^{-1} N^{(i)T} A^{1/2} \, A^{1/2} N^{(i)} (N^{(i)T} A N^{(i)})^{-1} N^{(i)T} A^{1/2} = A^{1/2} N^{(i)} A^{(i)-1} A^{(i)} A^{(i)-1} N^{(i)T} A^{1/2} = A^{1/2} N^{(i)} A^{(i)-1} N^{(i)T} A^{1/2} = P^{(i)}.$$
It follows that $P^{(i)}$ is a projection, moreover an $A$-orthogonal one. Further, $P^{(i)} = P^{(i)T}$, so the projection is symmetric.

Let us denote $\varepsilon^{(k)} = x^{(k)} - x^*$, where $x^*$ denotes the solution of the problem. Errors are analyzed in terms of the energy norm $\|x\|_A = \sqrt{(x,x)_A}$, where $(x,y)_A = x^T A y$. Note that $A$ is a symmetric positive definite matrix and $A^{(i)}$ is a principal submatrix of $A$; hence $A^{(i)}$ is also symmetric positive definite. We have
$$\|\varepsilon^{(k)}\|_A^2 = \varepsilon^{(k)T} A \varepsilon^{(k)} = \varepsilon^{(k)T} A^{1/2} A^{1/2} \varepsilon^{(k)} = \|A^{1/2} \varepsilon^{(k)}\|_2^2.$$
Thus
$$A^{1/2} \varepsilon^{(k+1)} = (I - P^{(1)}) \cdots (I - P^{(n_d)})(I - P^{(n_d)}) \cdots (I - P^{(1)}) \, A^{1/2} \varepsilon^{(k)} = M A^{1/2} \varepsilon^{(k)}. \qquad (7)$$
Since each $I - P^{(i)}$ is a symmetric $A$-orthogonal projection, $M$ is a symmetric matrix. Moreover, $M$ is by its definition a positive semi-definite matrix. It can be proved that $M$ is even positive definite.

2.2 Dependence on the dimension of $V_n$

For the following considerations we suppose that piecewise linear finite elements are used. In that case $n = O(1/h^d)$, where $h$ is the stepsize of the mesh. If we keep domains of the same geometry, the number of elements per domain grows as $O((H/h)^d)$, where $H$ is the typical size of a domain. The number of iterations then stays the same, but the amount of work for one step increases. On the other hand, if we keep the number of elements inside each domain fixed, the size of the domains decreases and the number of domains increases. This leads to an increasing number of iterations and a slightly increasing amount of work for one full step.

The usual remedy is to use a coarse space. This typically means representing each domain by one basis function. The coarse space is created, and the solution of the correction problem on the coarse level is inserted between steps (4) and (5) of Algorithm 2.1. A detailed analysis is given, for example, in [2]. When this is done in the right way, we get $O(1)$ convergence speed.

2.3 Basic convergence

Let us denote
$$E(x) = x^T A x - 2 x^T b. \qquad (8)$$
It is known that $Ax = b$ if and only if $E$ attains its minimum at $x$. Each step of Algorithm 2.1 performs a minimization of the functional (8) over a corresponding subspace, and the following inequality holds for successive terms of the minimizing sequence:
$$E(x^{(k+1)}) \le E(x^{(k)}). \qquad (9)$$
We prove the following equivalence.

Lemma 2.1. Equality in (9) occurs if and only if $x^{(k)}$ is the exact solution of $Ax = b$.

Proof. Clearly, $x^{(k)}$ being the exact solution is equivalent to $r^{(k)} = 0$, where $r^{(k)} = b - A x^{(k)}$. We first prove one direction of the equivalence. Suppose that $x^{(k)}$ is the exact solution. Then $r^{(k)} = 0$, no correction is made in any step of Algorithm 2.1, and equality in (9) occurs. Now we prove the opposite direction by contradiction: suppose that $x^{(k)}$ is not the exact solution and that equality in (9) holds. If $x^{(k)}$ is not the exact solution, then $r^{(k)} \neq 0$, and therefore there exists a least index $i_m$ such that $N^{(i_m)T} r^{(k)} \neq 0$. This causes a strict decrease of $E$ at that step of Algorithm 2.1 and hence the strict inequality $E(x^{(k+1)}) < E(x^{(k)})$. This contradicts the assumed equality in (9), and the proof of the equivalence in Lemma 2.1 is complete. $\Box$

3 Recursive approach

Another possibility comes from the idea that the local problems are conceptually identical to the original one. This opens the possibility of using the same Schwarz algorithm for solving them. It means that the domains keep the same geometry, so the subdomains $\Omega^{(i)}$ in (2) remain unchanged; on the other hand, the number of degrees of freedom per domain increases. In this case we recommend using the same algorithm on each domain separately.

3.1 Two-level variant

We start from Algorithm 2.1 and replace both steps denoted by (♣) by an iterative solution for $c$. It means that we replace $A^{(i)-1} \tilde r$ by an approximate solution of the problem $A^{(i)} c = \tilde r$. As the method we again use Algorithm 2.1, with $\ell$ steps. The error operator then has the form
$$\tilde M = (I - \tilde P^{(1)}) \cdots (I - \tilde P^{(n_d)})(I - \tilde P^{(n_d)}) \cdots (I - \tilde P^{(1)}),$$
where
$$\tilde P^{(i)} = A^{1/2} N^{(i)} \tilde Q^{(i)} N^{(i)T} A^{1/2}. \qquad (10)$$
The expression $(N^{(i)T} A N^{(i)})^{-1}$ in (6), denoted for short by $Q^{(i)}$, is replaced in (10) by
$$\tilde Q^{(i)} = A^{(i)-1/2} \left[ I - \left( I - A^{(i)1/2} M^{(i)} A^{(i)1/2} \right)^{\ell} \right] A^{(i)-1/2}, \qquad (11)$$
where $A^{(i)}$ is from (3) and $M^{(i)}$ denotes the matrix of the iteration of Algorithm 2.1 applied on the $i$-th domain, written in the form of the iterative method below.

This replacement comes from the following. Let us solve a problem $Ax = b$ by the iterative method
$$x^{(i+1)} = x^{(i)} + M (b - A x^{(i)}),$$
where $M$ is a symmetric positive definite matrix. The initial approximation is
$$x^{(0)} = 0. \qquad (12)$$
Let $x^* = A^{-1} b$. Then
$$x^* - x^{(i+1)} = x^* - x^{(i)} - M (b - A x^{(i)}) = (I - MA)(x^* - x^{(i)})$$
and hence
$$A^{1/2}(x^* - x^{(i+1)}) = (I - A^{1/2} M A^{1/2}) A^{1/2} (x^* - x^{(i)}).$$
After $\ell$ iterations we get
$$A^{1/2}(x^* - x^{(\ell)}) = (I - A^{1/2} M A^{1/2})^{\ell} A^{1/2} (x^* - x^{(0)})$$
and thus
$$x^* - x^{(\ell)} = A^{-1/2} (I - A^{1/2} M A^{1/2})^{\ell} A^{1/2} (x^* - x^{(0)}).$$
Since $x^{(0)} = 0$ we get
$$x^* - x^{(\ell)} = A^{-1/2} (I - A^{1/2} M A^{1/2})^{\ell} A^{1/2} x^* = A^{-1/2} (I - A^{1/2} M A^{1/2})^{\ell} A^{-1/2} b.$$
Thus
$$x^{(\ell)} = x^* - A^{-1/2} (I - A^{1/2} M A^{1/2})^{\ell} A^{-1/2} b = A^{-1/2} \left[ I - \left( I - A^{1/2} M A^{1/2} \right)^{\ell} \right] A^{-1/2} b,$$
which explains the form of $\tilde Q^{(i)}$ in (11) and of $\tilde P^{(i)}$ in (10).
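To make the replacement of the steps (♣) more concrete, the following sketch performs $\ell$ inner iterations of the same symmetrized Schwarz method on each local matrix $A^{(i)}$ instead of an exact solve, starting from the zero initial approximation as in (12). It builds on the hypothetical schwarz_step sketch given after Algorithm 2.1; the argument sub_idx, a list of inner index sets given in the local numbering of each domain, is our assumption about how the subdomains of the $i$-th domain might be represented.

```python
import numpy as np  # assumes the schwarz_step sketch above is available

def schwarz_step_two_level(A, b, x, idx, sub_idx, ell):
    """One step of the two-level variant: each exact local solve
    (the steps marked (clubsuit) in Algorithm 2.1) is replaced by
    `ell` steps of the same symmetrized Schwarz method applied to
    the local matrix A^(i), started from the zero vector, cf. (12).

    sub_idx[i] : list of index arrays, in the local numbering of the
                 i-th domain, covering its unknowns (inner subdomains)
    """
    x = x.copy()
    nd = len(idx)
    # forward sweep (4) followed by backward sweep (5)
    order = list(range(nd)) + list(reversed(range(nd)))
    for i in order:
        Ii = idx[i]
        r = b - A @ x
        r_loc = r[Ii]                 # tilde r = N^(i)T r
        A_loc = A[np.ix_(Ii, Ii)]     # A^(i) = N^(i)T A N^(i)
        c = np.zeros(len(Ii))         # inner initial approximation x^(0) = 0
        for _ in range(ell):          # ell inner steps instead of A^(i)^{-1} tilde r
            c = schwarz_step(A_loc, r_loc, c, sub_idx[i])
        x[Ii] += c
    return x
```

In the recursive multilevel variant discussed next, the inner solves would themselves be replaced by calls of the same kind, as long as the local problems remain large.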
3.2 Recursive multilevel method

When we use the two-level method we need to compute $\tilde P^{(i)}$ in (10), which requires knowing $\tilde Q^{(i)}$ from (11). For $\tilde Q^{(i)}$ it is necessary to know $M^{(i)}$, and its realization comes from the solution of local problems on the subdomains of the $i$-th domain. If these local problems are still large, the process may be repeated again, and the two-level method becomes a recursive multilevel method.

As to the convergence of the recursive variant of the method, the same facts as in Section 2.3 can be used. Hence the recursive multilevel method converges as well, and we have proved the following theorem.

Theorem 3.1. The recursive multilevel method is convergent.

4 Cost analysis

4.1 Two levels

Assume that the work required for solving a problem of size $n$ is $W = K n^{1+\beta}$, with $K$ and $\beta$ positive constants. Let $\alpha$ be the relative overlapping in one dimension and let $n_d$ be the number of domains. Then the size of a local problem is
$$n_{loc} = \frac{n}{n_d} (1+\alpha)^d$$
and the work needed for its solution is
$$W = K \left( \frac{n}{n_d} (1+\alpha)^d \right)^{1+\beta}.$$
Thus the work for one iteration is
$$W = 2 n_d K \left( \frac{n}{n_d} (1+\alpha)^d \right)^{1+\beta} = \frac{2 (1+\alpha)^{d(1+\beta)}}{n_d^{\beta}} K n^{1+\beta},$$
and for $\ell$ iterations on this level
$$W = \frac{2 \ell \, (1+\alpha)^{d(1+\beta)}}{n_d^{\beta}} K n^{1+\beta}.$$

4.2 $k$ levels

On the $k$-th level we repeat the previous considerations. We have $n_d^k$ subdomains, and the size of one subdomain is
$$n_{loc,k} = n \left( \frac{1+\alpha}{n_d} \right)^k.$$
The problem is solved on each subdomain $(2\ell)^k$ times. The total work is
$$W = (2\ell)^k n_d^k \, K \left( n \left( \frac{1+\alpha}{n_d} \right)^k \right)^{1+\beta} = K \left( 2 \ell \, n_d^{-\beta} (1+\alpha)^{1+\beta} \right)^k n^{1+\beta}.$$

4.3 Full recursion

We want to find $k$ such that $n_{loc,k} = 1$, i.e. we take the greatest possible $k$. This is
$$k = \frac{\ln n}{\ln n_d - \ln(1+\alpha)}.$$
We obtain
$$W = K \left( 2 \ell \, n_d^{-\beta} (1+\alpha)^{1+\beta} \right)^{\frac{\ln n}{\ln n_d - \ln(1+\alpha)}} n^{1+\beta},$$
which is
$$W = K \exp\left( \left( \ln 2 + \ln \ell - \beta \ln n_d + (1+\beta) \ln(1+\alpha) \right) \frac{\ln n}{\ln n_d - \ln(1+\alpha)} + (1+\beta) \ln n \right).$$
After some manipulation this expression simplifies to
$$W = K n^{1+\gamma}, \qquad \gamma = \frac{\ln(1+\alpha) + \ln 2 + \ln \ell}{\ln n_d - \ln(1+\alpha)}. \qquad (13)$$
We can see from (13) that $\gamma$ can be made arbitrarily small by an appropriate choice of $\alpha$, $\ell$ and $n_d$.
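As a rough numerical illustration of (13), the following snippet evaluates $\gamma$ for a few parameter choices. The particular values of $\alpha$, $\ell$ and $n_d$ are ours and serve only to show how $\gamma$ decreases when the number of domains $n_d$ grows.

```python
import math

def gamma(alpha, ell, nd):
    """Exponent gamma from (13): total work W = K * n**(1 + gamma)."""
    return (math.log(1 + alpha) + math.log(2) + math.log(ell)) / \
           (math.log(nd) - math.log(1 + alpha))

# illustrative parameter choices (not taken from the paper)
for nd in (8, 64, 512):
    print(nd, round(gamma(alpha=0.2, ell=2, nd=nd), 3))
# gamma shrinks as nd grows: roughly 0.83, 0.39 and 0.26 here
```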
5 Conclusion

An alternative to the classic two-level method with a coarse space is proposed in this paper. One of the significant advantages of the method presented here is that the memory requirements can be reduced dramatically if this is called for.

References

[1] H. A. Schwarz, Gesammelte Mathematische Abhandlungen, volume 2, pages 133-143, Springer, Berlin, 1890. First published as Über einen Grenzübergang durch alternierendes Verfahren, Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich, volume 15, 1870, pp. 272-286.

[2] A. Toselli and O. Widlund, Domain Decomposition Methods - Algorithms and Theory, Springer-Verlag, Berlin Heidelberg, 2005.

[3] M. Brezina and P. Vaněk, A black-box iterative solver based on a two-level Schwarz method, Computing 63(3), 1999, 233-263.

[4] R. Varga, Matrix Iterative Analysis, Prentice Hall, first edition, 1962.