J. Nig. Soc. Phys. Sci. 5 (2023) 1112

Journal of the Nigerian Society of Physical Sciences

A Review on Quadrant Interlocking Factorization: WZ and WH Factorization

Dlal Bashir^a,*, Hailiza Kamarulhaili^a, Olayiwola Babarinsa^b

^a School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Pulau Pinang, Malaysia
^b Department of Mathematics, Federal University Lokoja, Kogi State, Nigeria

Abstract

Quadrant interlocking factorization (QIF), an alternative to LU factorization, is suitable for factorizing an invertible matrix A, that is, one with det(A) ≠ 0. The QIF, with its application in parallel computing, is among the most efficient matrix factorization techniques, yet it remains underutilized. The two principal forms of QIF, similar in both algorithm and computation, are WZ factorization and WH factorization; they nevertheless differ in properties and applications. This review discusses both the older form of QIF, called WZ factorization, and the latest form, called WH factorization, with a worked example and open questions to further the study of the two factorization techniques.

DOI:10.46481/jnsps.2023.1112

Keywords: Quadrant interlocking factorization, WZ factorization, WH factorization, LU factorization, Linear systems

Article History:
Received: 06 October 2022
Received in revised form: 14 January 2023
Accepted for publication: 14 January 2023
Published: 24 February 2023

© 2023 The Author(s). Published by the Nigerian Society of Physical Sciences under the terms of the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Communicated by: S. Fadugba

1. Introduction

For over a century, the factorization of a square matrix has been a research interest of matrix theorists [1]. Many factorization techniques have been deployed, such as LU, QR and Cholesky factorization.
LU factorization is a well-known method with high accuracy, reportedly deployed in the cryptanalysis of the Enigma machine during World War II. LU factorization was introduced to solve square systems of linear equations by inverting a matrix through the underlying Gaussian elimination procedure [2]. This feature, combined with low computational complexity and partial pivoting techniques, makes LU factorization extremely efficient. Without a proper ordering of the matrix, however, LU factorization may fail. The flaw can be removed by reordering the rows of B so that the first element of the permuted matrix is nonzero [3]. Thus, a proper permutation of rows or columns is sufficient for the LU factorization to be numerically stable [4, 5]. LU factorization is implemented in the LAPACK library to exploit standard software library architectures [6]. To improve the efficiency of computation during factorization, an alternative technique to LU factorization was developed, named quadrant interlocking factorization (QIF) [7]. Factorization of a matrix B is expensive to compute, and applying different optimization techniques coupled with the parallelism of contemporary computers makes QIF extremely efficient and suitable for parallel computing. Among other factorization techniques such as LU, QR and Cholesky decomposition, QIF proves to be among the best factorization algorithms in terms of efficiency, parallelization and accuracy [8, 9].

∗Corresponding author tel. no: +60 163725820
Email address: dlaljumabashir@yahoo.com (Dlal Bashir)

2. List of abbreviations and symbols

The symbols and abbreviations used in this article are as follows:

2.1. List of abbreviations

QIF: Quadrant Interlocking Factorization
LU: Lower and upper triangular matrices
Q: Orthogonal matrix
LAPACK: Linear Algebra Package
PIE: Parallel Implicit Elimination
GE: Gaussian elimination
OpenMP: Open Multi-Processing
GPU: Graphics processing unit
CUDA: Compute Unified Device Architecture
EDK HW/SW: Embedded Development Kit hardware/software
P: Permutation matrix
AMD: Advanced Micro Devices
Intel: Integrated Electronics
MATLAB: Matrix Laboratory
GGH: Goldreich-Goldwasser-Halevi encryption scheme

2.2. List of symbols

R: the set of real numbers
det(B): determinant of matrix B
≥: greater than or equal to
≤: less than or equal to
≠: not equal to
B^T: transpose of matrix B
∑_{i=1}^{n} X_i: finite sum
∏_{i=1}^{n} X_i: finite product
⌊x⌋: floor of x
⌈x⌉: ceiling of x
||B||: norm of matrix B
H: hourglass matrix
H_T(nz): total number of nonzero entries in H
H_T(z): total number of zero entries in H
f_m: filanz submatrix of H
lim_{n→∞} f(n): limit of a function as n tends to infinity
λ: eigenvalue (lambda)
G: graph
V: vertex set
E: edge set
E_A(G): energy of a graph
|B|: modulus of B
F_2: friendship graph
𝒢: mixed hourglass graph
M(𝒢): mixed hourglass-adjacency matrix
E_M(𝒢): mixed energy of a mixed hourglass graph
A_k: number of arcs in 𝒢

3. Quadrant interlocking factorization

Quadrant interlocking factorization (QIF), or butterfly factorization, of a nonsingular matrix B was coined by Evans and Hatzopoulos [7]. In 1979, Evans and Hatzopoulos [10, 11] gave details of the factorization as an alternative to LU factorization and of how to avoid breakdown of the QIF algorithm. The factorization breaks matrices up into structured forms which are then regrouped and solved as sub-blocks [12]. QIF is known for the adaptability of its direct method to solving systems of linear equations.
Thus, the factorization gives rise to the use of parallel implicit elimination (PIE) for the solution of linear systems, which simultaneously computes two matrix columns at a time for parallel implementation, unlike Gaussian elimination (GE), which computes one column at a time [13]. The stability of QIF comes from centro-nonsingularity (all central submatrices nonsingular), which is far more reliable than in any other type of factorization [8].

3.1. WZ factorization

In the WZ factorization of a nonsingular matrix B, the W-matrix (bow-tie matrix) and the Z-matrix exist together [14, 15]. The Z-matrix and W-matrix are well known as the interlocking quadrant factors of B, having butterfly shapes of the form (shown here for n = 6, with • denoting a possibly nonzero entry)

W =
⎡ 1  0  0  0  0  0 ⎤
⎢ •  1  0  0  0  • ⎥
⎢ •  •  1  0  •  • ⎥
⎢ •  •  0  1  •  • ⎥
⎢ •  0  0  0  1  • ⎥
⎣ 0  0  0  0  0  1 ⎦

and

Z =
⎡ •  •  •  •  •  • ⎤
⎢ 0  •  •  •  •  0 ⎥
⎢ 0  0  •  •  0  0 ⎥
⎢ 0  0  •  •  0  0 ⎥
⎢ 0  •  •  •  •  0 ⎥
⎣ •  •  •  •  •  • ⎦,

such that

B = WZ. (1)

The Z-matrix and W-matrix of order n (n ≥ 3) are generally defined row-wise as [10, 16]

Z_i = (0,…,0, z_{i,i},…,z_{i,n−i+1}, 0,…,0)^T with i−1 leading zeros, for i = 1,…,⌊(n+1)/2⌋;
Z_i = (0,…,0, z_{i,n−i+1},…,z_{i,i}, 0,…,0)^T with n−i leading zeros, for i = ⌊(n+1)/2⌋+1,…,n; (2)

W_1 = (1, 0,…,0);
W_i = (w_{i,1},…,w_{i,i−1}, 1, 0,…,0, w_{i,n−i+2},…,w_{i,n}) with n−2i+1 central zeros, for i = 2,…,⌊(n+1)/2⌋;
W_i = (w_{i,1},…,w_{i,n−i}, 0,…,0, 1, w_{i,i+1},…,w_{i,n}) with 2i−n−1 central zeros, for i = ⌊(n+1)/2⌋+1,…,n−1;
W_n = (0,…,0, 1). (3)

In the WZ factorization there are ∑_{k=1}^{⌊(n−1)/2⌋} (n−2k) linear systems of size 2×2 to be solved, which account for the elements of the W-matrix and Z-matrix [17]. The direct method for solving these linear systems, under the nonsingularity constraint presumed for their determinants, is the classical Cramer's rule.
The unique solution (x_1, x_2, …, x_n) provided by Cramer's rule [18] to the system

Bx = c, (4)

is

x_i = det(B_{i|c}) / det(B), (5)

where det(B) ≠ 0, B = (b_{i,j}) with 1 ≤ i, j ≤ n, x = (x_1,…,x_n)^T, c = (c_1,…,c_n)^T, x, c ∈ R^n and B ∈ R^{n×n}. Here B_{i|c} is the matrix obtained from B by substituting the column vector c for the i-th column of B, for i = 1, 2, …, n.

3.1.1. WZ factorization algorithm

For the WZ factorization, the matrix B is nonsingular. However, if a central submatrix of B is singular, then columns or rows of the matrix must be interchanged by a suitable permutation; otherwise the factorization breaks down. The elements of the W-matrix (with 1's on its main diagonal and 0's on the antidiagonal) are obtained by solving pairs of simultaneous equations via Cramer's rule, which requires matrix B to be successively updated; this sequence of updates transforms B into the Z-matrix [19, 20]. The matrix update is the most time-consuming part of the factorization. The steps to obtain the Z-matrix are as follows:

Step 1: Let B^{(0)} = Z^{(0)} for the initial update and obtain the first and last rows of the Z-matrix as z^{(0)}_{1,j} = b^{(0)}_{1,j} and z^{(0)}_{n,j} = b^{(0)}_{n,j}, for j = 1,…,n. Now we compute w^{(1)}_{i,1} and w^{(1)}_{i,n}, for i = 2,…,n−1, from the (n−2) sets of 2×2 linear systems in Equation (6) using Cramer's rule:

z^{(0)}_{1,1} w^{(1)}_{i,1} + z^{(0)}_{n,1} w^{(1)}_{i,n} = −z^{(0)}_{i,1},
z^{(0)}_{1,n} w^{(1)}_{i,1} + z^{(0)}_{n,n} w^{(1)}_{i,n} = −z^{(0)}_{i,n}. (6)

The values w^{(1)}_{i,1} and w^{(1)}_{i,n} are collected in matrix form as

W^{(1)} =
⎡ 1              0  ⋯  0   0              ⎤
⎢ w^{(1)}_{2,1}  1      0   w^{(1)}_{2,n}  ⎥
⎢ ⋮                 ⋱       ⋮              ⎥
⎢ w^{(1)}_{n−1,1} 0     1   w^{(1)}_{n−1,n}⎥
⎣ 0              0  ⋯  0   1              ⎦.

Step 2: We update matrix B (letting B^{(1)} = Z^{(1)} for the first update) and compute

Z^{(1)} = W^{(1)} Z^{(0)}. (7)
We then proceed analogously for the inner square submatrix of Z^{(1)} of size (n−2), and so on.

Step 3: Next, we compute w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} from Equation (8) by solving its 2×2 linear systems using Cramer's rule, where k = 1, 2, …, ⌊(n−1)/2⌋ and i = k+1, …, n−k:

z^{(k−1)}_{k,k} w^{(k)}_{i,k} + z^{(k−1)}_{n−k+1,k} w^{(k)}_{i,n−k+1} = −z^{(k−1)}_{i,k},
z^{(k−1)}_{k,n−k+1} w^{(k)}_{i,k} + z^{(k−1)}_{n−k+1,n−k+1} w^{(k)}_{i,n−k+1} = −z^{(k−1)}_{i,n−k+1}. (8)

Then we place the values w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} into a matrix W^{(k)}, which is the identity matrix except in columns k and n−k+1, whose rows i = k+1, …, n−k hold w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} respectively.

Step 4: We further compute, for the k-th such successful step,

Z^{(k)} = W^{(k)} Z^{(k−1)}. (9)

After the final stage we let Z^{(k)} = Z. The resulting Z-matrix carries the stage-0 values z^{(0)} in its first and last rows, the stage-1 values z^{(1)} in its second and (n−1)-th rows, and so on inwards, with z^{(k−1)}_{k,k}, z^{(k−1)}_{k,n+1−k}, z^{(k−1)}_{n+1−k,k} and z^{(k−1)}_{n+1−k,n+1−k} at the centre of the butterfly pattern and zeros elsewhere.

One complete stage of the WZ factorization is finished when Z^{(k)} has been computed; the factorization requires ⌊(n−1)/2⌋ stages to compute all the elements of W and Z [10]. After the algorithm of WZ factorization was established, the following theorems were put forward:

Theorem 3.1 (Factorization theorem [8]). Let B ∈ R^{n×n} be a nonsingular matrix. Then B has a unique QIF, B = WZ, if and only if the central submatrices of B are nonsingular.

Theorem 3.2 ([8]). If B ∈ R^{n×n} is a nonsingular matrix, then there exists a row permutation matrix P such that the QIF can be carried out with pivoting as PB = WZ.

Corollary 3.3 ([8]). Every symmetric positive definite and every strictly diagonally dominant matrix has a QIF.

Theorem 3.4 ([21]). Let B be a nonsingular tridiagonal diagonally dominant matrix. Then the Z-matrix factored from its QIF is also tridiagonal diagonally dominant.
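Steps 1 to 4 above can be condensed into a short dense implementation. The following is a minimal sketch (not the authors' code): `wz_factorize` is our own name, the matrix is assumed centro-nonsingular so no pivoting is attempted, and the returned W stores the negated elimination multipliers so that B = WZ holds directly.

```python
import numpy as np

def wz_factorize(B):
    """Minimal dense WZ (quadrant interlocking) factorization sketch.

    Assumes B is centro-nonsingular (all central submatrices nonsingular),
    so no pivoting is attempted.  Returns (W, Z) with B = W @ Z, where Z
    has the butterfly (Z-matrix) zero pattern.
    """
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    Z, W = B.copy(), np.eye(n)
    for k in range(n // 2):
        kk, nn = k, n - k - 1                  # current outer row pair
        for i in range(k + 1, nn):
            # Cramer's rule on the 2x2 system of Equation (8)
            det = Z[kk, kk] * Z[nn, nn] - Z[kk, nn] * Z[nn, kk]
            wik = (-Z[i, kk] * Z[nn, nn] + Z[i, nn] * Z[nn, kk]) / det
            win = (-Z[kk, kk] * Z[i, nn] + Z[kk, nn] * Z[i, kk]) / det
            # store the negated multipliers so that B = W @ Z
            W[i, kk], W[i, nn] = -wik, -win
            # row update of Equation (9): zeros columns kk and nn of row i
            Z[i, :] += wik * Z[kk, :] + win * Z[nn, :]
    return W, Z
```

On the order-6 example matrix of Section 3.4 this reproduces the values computed there, for instance z^{(1)}_{2,2} = 151/19, with the multipliers appearing in W with opposite sign.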
Due to its uniqueness, the WZ factorization exists for every nonsingular matrix, often with pivoting [22, 8]. Pivoting, which swaps rows or columns of the matrix or, equivalently, multiplies it by a permutation matrix, improves the numerical stability of WZ factorization [1]. WZ factorization will not fail without pivoting if the matrix is real symmetric positive definite or diagonally dominant; see [23, 8, 24]. The matrix norm used for WZ factorization is the Frobenius norm, given as [25]

‖B − WZ‖_F = √( ∑_{i=1}^{n} ∑_{j=1}^{n} | b_{i,j} − ∑_{k=1}^{n} w_{i,k} z_{k,j} |² ). (10)

The numerical accuracy of WZ factorization, measured as −log₁₀( ‖B − WZ‖ / (n·‖B‖) ), is a relative error and depends on the matrix norms [16]. The efficiency of WZ factorization further depends on an effective use of the memory hierarchy (processor arrays): if there is not enough fast memory, the processor must wait for data, which reduces efficiency [26].

3.1.2. Importance of WZ factorization

WZ factorization has been applied in the modelling of Markov chains, aside from its use in parallelization [27]. WZ factorization offers parallelism in solving both sparse and dense linear systems, enhancing performance using OpenMP, GPU/CUDA or EDK HW/SW codesign architectures [28, 29, 12]. Yalamov [24] showed that WZ factorization is faster on computers with a parallel architecture than other matrix factorization methods. The factorization has been applied in scientific computing, especially in science and engineering; see [30, 31, 27, 32, 33]. The block WZ factorization is discussed in [34, 15, 23, 26], where the Z-matrix is divided into r² blocks, each of size s×s, with n = r·s.

Theorem 3.5 ([26]). The block WZ factorization exists if the matrix B has a strictly dominant diagonal.
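The Frobenius-norm residual and the accuracy measure of Equation (10) translate directly into code. A small sketch (`wz_accuracy` is our own helper name; the Frobenius norm is used for both norms in the ratio):

```python
import numpy as np

def wz_accuracy(B, W, Z):
    """Digits of accuracy  -log10(||B - W Z||_F / (n * ||B||_F))  of a
    computed factorization B ~ W @ Z, following Equation (10)."""
    n = B.shape[0]
    err = np.linalg.norm(B - W @ Z, 'fro')
    if err == 0.0:
        return np.inf                # exact factorization
    return -np.log10(err / (n * np.linalg.norm(B, 'fro')))
```

A residual of about 1e-8 on a well-scaled matrix of small order yields roughly eight digits of accuracy under this measure.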
A Z-matrix of even order (the construction also applies to odd order) can be partitioned into a structured Z-system of 2×2 triangular block matrices, defined as

Z_system = [ Z_{1,1}  Z_{1,2} ; Z_{2,1}  Z_{2,2} ] if n is even;
Z_system = [ Z_{1,1}  x_1  Z_{1,2} ; 0  x  0 ; Z_{2,1}  x_2  Z_{2,2} ] if n is odd. (11)

Each block is of size n/2 × n/2 if n is even, while for odd n an additional column vector x̃, positioned at the (n+1)/2-th column of the matrix, is partitioned into x_1, x and x_2 [26]. The Schur complement of the leading block in Equation (11) is then defined as

Z_system / Z_{1,1} = Z_{2,2} − Z_{2,1} Z_{1,1}^{-1} Z_{1,2}. (12)

Theorem 3.6 ([26]). If Z_system and the block Z_{1,1} are invertible, then the matrix Z_{2,2} − Z_{2,1} Z_{1,1}^{-1} Z_{1,2} is a lower triangular invertible matrix.

3.1.3. WZ factorization versus LU factorization

WZ factorization performs better on Intel processors than on AMD processors [16]. Even though WZ factorization and LU factorization have similar computational complexity, WZ factorization has still been shown to outperform LU factorization (except block LU factorization) irrespective of the version of MATLAB or the number of processors used [35, 16]. However, on a uniprocessor, WZ factorization has no advantage over LU factorization, since every step is performed serially [12]. While LU factorization performs elimination serially in n−1 steps, WZ factorization eliminates in parallel in n/2 steps if n is even, or (n−1)/2 steps if n is odd. WZ factorization simultaneously computes two matrix columns at a time, unlike LU factorization, which computes one column at a time [13, 12, 36]. Unlike WZ factorization, LU factorization is not unique, but block LU factorization with larger diagonal blocks gives analytic results similar to WZ factorization [37].
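Before continuing the comparison, note that the Schur complement of Equations (11) and (12) is straightforward to compute for even n. A sketch (`zsystem_schur` is our own name; the block Z_{1,1} is assumed invertible):

```python
import numpy as np

def zsystem_schur(Z):
    """Schur complement  Z_system/Z_{1,1} = Z_{2,2} - Z_{2,1} Z_{1,1}^{-1} Z_{1,2}
    of an even-order Z-matrix partitioned into four n/2 x n/2 blocks
    (Equations (11)-(12)).  Assumes the block Z_{1,1} is invertible."""
    m = Z.shape[0] // 2
    Z11, Z12 = Z[:m, :m], Z[:m, m:]
    Z21, Z22 = Z[m:, :m], Z[m:, m:]
    # np.linalg.solve avoids forming the explicit inverse of Z11
    return Z22 - Z21 @ np.linalg.solve(Z11, Z12)
```

On a valid Z-matrix the result is lower triangular, as Theorem 3.6 states.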
For larger matrices, simulation times on a multiprocessor show that WZ factorization is about 20% faster than LU factorization for all numbers of processors [38, 39]. Besides not matching WZ factorization on parallel computers or mesh multiprocessors, LU factorization does not account for the structure of a centrosymmetric matrix in its factors [40]. For sparse matrices, LU and WZ factorization generate approximately the same number of nonzero elements [41]. However, LU factorization relies on the leading principal submatrices ( [b_{i,j}]_{i,j=1}^{l}, l = 1,…,n ), whereas WZ factorization relies on the nonsingular central submatrices ( [b_{j,k}]_{j,k=l}^{n+1−l}, l = 1,…,⌊(n+1)/2⌋ ) [15]. Depending on the form of the matrices, incomplete WZ preconditioning gives better results than incomplete LU factorization [26]. Although the loops in some parts of the LU factorization can be parallelized, WZ factorization is uniquely known for its inherent parallelism. Even when WZ and LU factorization are both implemented in OpenMP, WZ factorization achieves better execution times than LU factorization as the number of threads increases [42, 43].

3.2. WH factorization

Demeure [44] first posited the term hourglass matrix in describing a matrix derived from factorizing a square matrix via quadrant interlocking (bowtie-hourglass) factorization. It was further elucidated that the hourglass matrix is synonymous with the Z-matrix, which can be partitioned into a block-structured Z-system [15, 28]. However, the structure of the Z-matrix obtained from WZ factorization changes with the type of matrix being factorized (Toeplitz, Hankel, Hermitian, centrosymmetric, diagonally dominant or tridiagonal). A Z-matrix therefore does not always imply an hourglass matrix, nor are their applications always indistinguishable.
Consequently, the notion of sameness between the hourglass matrix and the Z-matrix was gradually dropped over time without a cogent reason. Recently, Babarinsa and Kamarulhaili [45] gave meticulous details of the hourglass matrix, denoted H-matrix, and of its quadrant interlocking factorization, obtained by restricting the computed entries of the factorization to be nonzero in accordance with the shape of an hourglass device.

Definition 3.1 ([45]). Let H be an hourglass matrix of order n (n ≥ 3). Then the hourglass matrix is defined entrywise as

H_{i,j} = h_{i,j} for 1 ≤ i ≤ ⌊(n+1)/2⌋ and i ≤ j ≤ n+1−i;
H_{i,j} = h_{i,j} for ⌈(n+2)/2⌉ ≤ i ≤ n and n+1−i ≤ j ≤ i;
H_{i,j} = 0 otherwise, (13)

where the h_{i,j} ∈ R are strictly nonzero elements.

In other words, an hourglass matrix is a nonsingular matrix of order n (n ≥ 3) with nonzero entries from the i-th to the (n−i+1)-th position of the i-th and (n−i+1)-th rows, and 0's elsewhere, for i = 1, 2, …, ⌊(n+1)/2⌋. To buttress the shape of the hourglass matrix, Figure 1 illustrates the structural comparison between the hourglass device and the hourglass matrix, with nonzero elements denoted by black dots and 0's elsewhere.

Figure 1. Structural comparison between hourglass device and hourglass matrix.

The factorization algorithm that produces an hourglass matrix is known as WH factorization. As with the factorization into a Z-matrix, the factorization into an hourglass matrix requires a W-matrix (see the form of the W-matrix in Equation (3)) to be computed during the process. In the WH factorization of a nonsingular matrix B, the H-matrix exists together with the W-matrix such that

B = WH. (14)

The matrix norm, numerical accuracy and efficiency of the WH factorization algorithm are similar to those of WZ factorization. The factorizations producing the hourglass matrix and the Z-matrix are quite similar, yet the former restricts the computed entries to be nonzero at every stage.
However, the QIF of the hourglass matrix specifies the number of row interchanges that may be made at each stage of the factorization if a computed entry is zero; otherwise the factorization breaks down.

3.2.1. WH factorization algorithm

The sequential steps of the factorization are as follows [45]:

Step 1: Let B = H^{(0)} for the initial update and check whether the first row (h^{(0)}_{1,j}) or last row (h^{(0)}_{n,j}) of H^{(0)} contains a zero, for j = 1, 2, …, n. If h^{(0)}_{1,j} = 0 or h^{(0)}_{n,j} = 0, apply a suitable row interchange in H^{(0)}. Then we compute w^{(1)}_{i,1} and w^{(1)}_{i,n} in Equation (15) from matrix H^{(0)} by solving the 2×2 systems of linear equations via Equation (5):

h^{(0)}_{1,1} w^{(1)}_{i,1} + h^{(0)}_{n,1} w^{(1)}_{i,n} = −h^{(0)}_{i,1};
h^{(0)}_{1,n} w^{(1)}_{i,1} + h^{(0)}_{n,n} w^{(1)}_{i,n} = −h^{(0)}_{i,n}. (15)

Whenever h^{(0)}_{n,n} h^{(0)}_{1,1} − h^{(0)}_{1,n} h^{(0)}_{n,1} = 0, use a suitable row interchange to avoid breakdown of the factorization. The values w^{(1)}_{i,1} and w^{(1)}_{i,n} are then written into a W-matrix W^{(1)} exactly as in the WZ algorithm.

Step 2: We then update H^{(0)} to H^{(1)} by evaluating its entries as

h^{(1)}_{i,j} = h^{(0)}_{i,j} + w^{(1)}_{i,1} h^{(0)}_{1,j} + w^{(1)}_{i,n} h^{(0)}_{n,j}. (16)

If one of the computed entries h^{(1)}_{2,j} or h^{(1)}_{n−1,j} in Equation (16) is zero, then apply row interchanges in H^{(1)} at h^{(1)}_{i,j}, for i = 2,…,n−1 and j = 1,…,n, no more than (n−4) times; otherwise the factorization breaks down.

Step 3: Now we compute w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} from matrix H^{(k−1)} by solving the 2×2 linear systems in Equation (17), which generalizes every update of H^{(k)}, and proceed similarly for the inner square submatrices of size (n−2k), and so on. That is,

h^{(k−1)}_{k,k} w^{(k)}_{i,k} + h^{(k−1)}_{n−k+1,k} w^{(k)}_{i,n−k+1} = −h^{(k−1)}_{i,k};
h^{(k−1)}_{k,n−k+1} w^{(k)}_{i,k} + h^{(k−1)}_{n−k+1,n−k+1} w^{(k)}_{i,n−k+1} = −h^{(k−1)}_{i,n−k+1}, (17)

where k = 1, 2, …, ⌊(n−1)/2⌋ and i = k+1, …, n−k.
Whenever h^{(k−1)}_{n−k+1,n−k+1} h^{(k−1)}_{k,k} − h^{(k−1)}_{n−k+1,k} h^{(k−1)}_{k,n−k+1} = 0, use a suitable row interchange to avoid breakdown of the factorization. Then we place the values w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} into a W-matrix W^{(k)}, which is the identity matrix except in columns k and n−k+1, whose rows i = k+1, …, n−k hold w^{(k)}_{i,k} and w^{(k)}_{i,n−k+1} respectively.

Step 4: We finally compute, for the k-th step,

h^{(k)}_{i,j} = h^{(k−1)}_{i,j} + w^{(k)}_{i,k} h^{(k−1)}_{k,j} + w^{(k)}_{i,n−k+1} h^{(k−1)}_{n−k+1,j}, (18)

where j = k+1, …, n−k. If one of the entries computed from Equation (18) is zero, then apply a row interchange, no more than (n−2k) times in H^{(k−1)}, and re-factorize; otherwise the factorization breaks down before producing the H-matrix. After the successful steps we obtain the hourglass matrix (H^{(k)} = H), whose first and last rows hold the stage-0 entries h^{(0)}, whose second and (n−1)-th rows hold the stage-1 entries h^{(1)} in columns 2,…,n−1, and so on inwards, with h^{(k−1)}_{k,k}, h^{(k−1)}_{k,n−k+1}, h^{(k−1)}_{n−k+1,k} and h^{(k−1)}_{n−k+1,n−k+1} at the centre of the hourglass pattern and zeros elsewhere.

If a row interchange occurs at any stage k, yielding a permutation matrix P^{(k)}, then

H = W^{(k)} P^{(k)} ⋯ W^{(2)} P^{(2)} W^{(1)} P^{(1)} B. (19)

Recall that k = 1, 2, …, ⌊(n−1)/2⌋ and that there are ⌊(n−1)/2⌋ stages in the factorization.
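Steps 1 to 4 can be sketched by augmenting the WZ elimination with the nonzero checks. The following simplified sketch (our own, not the authors' algorithm verbatim) reports breakdown instead of performing the row interchanges described above, omits the Step-1 check on the boundary rows, and stores negated multipliers in W so that B = WH:

```python
import numpy as np

def wh_factorize(B, tol=1e-12):
    """Sketch of WH factorization: WZ elimination plus the requirement that
    every computed entry inside the hourglass region stays nonzero.
    Raises ArithmeticError on breakdown instead of row-interchanging."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    H, W = B.copy(), np.eye(n)
    for k in range(n // 2):
        kk, nn = k, n - k - 1
        det = H[kk, kk] * H[nn, nn] - H[kk, nn] * H[nn, kk]
        if kk < nn - 1 and abs(det) < tol:
            raise ArithmeticError("singular filanz submatrix: row interchange needed")
        for i in range(k + 1, nn):
            wik = (-H[i, kk] * H[nn, nn] + H[i, nn] * H[nn, kk]) / det
            win = (-H[kk, kk] * H[i, nn] + H[kk, nn] * H[i, kk]) / det
            W[i, kk], W[i, nn] = -wik, -win
            H[i, :] += wik * H[kk, :] + win * H[nn, :]
        # hourglass condition: updated rows must stay nonzero in the region
        for i in range(k + 1, nn):
            if np.any(np.abs(H[i, k + 1:n - k - 1]) < tol):
                raise ArithmeticError("zero entry in hourglass region: re-factorize")
    return W, H
```

Note that a nonsingular matrix such as the identity breaks down here for any row ordering: no permutation of its rows yields the dense boundary rows an H-matrix requires, which illustrates that WH factorization can fail to exist even for nonsingular matrices.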
In each successful loop i, j = k+1, k+2, …, n−k of each stage, there are (n−2k) simultaneous 2×2 systems to be solved using Cramer's rule. To avoid breakdown at the filanz submatrices (see Definition 3.2), row interchanges may be required at every stage of the factorization; choosing them so that the 2×2 submatrix has the least condition number, in any matrix norm, is advisable. The overall time of the WH factorization algorithm may increase if there is a row interchange at every stage, owing to the movement and sorting of data in and out of the processor.

3.2.2. On hourglass matrix

Proposition 3.7 ([45]). Let H be an hourglass matrix of order n (n ≥ 3), and let H_T(nz) and H_T(z) be the total numbers of nonzero and zero entries of H respectively. Then

H_T(nz) = ( n² + 2n − |(n+1) mod 2 − 1| ) / 2 (20)

and

H_T(z) = ( n² − 2n + |(n+1) mod 2 − 1| ) / 2. (21)

Definition 3.2 ([45]). The filanz submatrix, denoted f_m for m = 1, …, ⌈(n−1)/2⌉, is the 2×2 nonsingular matrix obtained by taking the first and last nonzero elements of the m-th and (n+1−m)-th rows of the H-matrix:

f_m = [ h^{(m−1)}_{m,m}  h^{(m−1)}_{m,n+1−m} ; h^{(m−1)}_{n+1−m,m}  h^{(m−1)}_{n+1−m,n+1−m} ]. (22)

Definition 3.3 ([45]). The epicenter element, denoted h_{(n+1)/2,(n+1)/2}, is the nonzero element located at the intersection of the ((n+1)/2)-th row and ((n+1)/2)-th column of an hourglass matrix of odd order.

Proposition 3.8 ([45]). Let det(H) be the determinant of an hourglass matrix of order n (n ≥ 3). Then

det(H) = ∏_{i=1}^{⌈(n−1)/2⌉} det(f_i) if n is even;
det(H) = h_{(n+1)/2,(n+1)/2} · ∏_{i=1}^{⌈(n−1)/2⌉} det(f_i) if n is odd. (23)

The determinant of the matrix B can then be computed as

det(B) = det( W^{(k)} P^{(k)} ⋯ W^{(2)} P^{(2)} W^{(1)} P^{(1)} )^{-1} det(H), (24)
where det( W^{(k)} ⋯ W^{(2)} W^{(1)} ) = 1 and det( P^{(k)} ⋯ P^{(2)} P^{(1)} ) = (−1)^{p_n}, with

(−1)^{p_n} = 1 if an even number of rows are interchanged, and −1 if an odd number of rows are interchanged.

Therefore,

det(B) = (−1)^{p_n} det(H), (25)

where p_n is the total number of row interchanges (permutation matrices) occurring in the factorization.

The partitioning of an hourglass matrix of order n (n > 3) into 2×2 block triangular matrices, with each block of size ⌊n/2⌋ × ⌊n/2⌋, is called the H-system:

H_system = [ H_{1,1}  H_{1,2} ; H_{2,1}  H_{2,2} ] if n is even;
H_system = [ H_{1,1}  x_1  H_{1,2} ; 0  x  0 ; H_{2,1}  x_2  H_{2,2} ] if n is odd, (26)

where

H_{1,1} = h_{i,j} for 1 ≤ i ≤ ⌈(n−1)/2⌉, i ≤ j ≤ ⌈(n−1)/2⌉, and 0 otherwise;
H_{1,2} = h_{i,j} for 1 ≤ i ≤ ⌈(n−1)/2⌉, ⌊(n+3)/2⌋ ≤ j ≤ n+1−i, and 0 otherwise;
H_{2,1} = h_{i,j} for ⌊(n+3)/2⌋ ≤ i ≤ n, ⌊(n+3)/2⌋ ≤ j ≤ i, and 0 otherwise;
H_{2,2} = h_{i,j} for ⌊(n+3)/2⌋ ≤ i ≤ n, n+1−i ≤ j ≤ ⌈(n−1)/2⌉, and 0 otherwise;
x_1 = h_{i,j} for 1 ≤ i ≤ (n−1)/2, j = (n+1)/2;
x_2 = h_{i,j} for (n+3)/2 ≤ i ≤ n, j = (n+1)/2;
x = h_{i,j} for i = j = (n+1)/2.

Theorem 3.9 ([46]). The Schur complement exists in H_system only if the H-matrix is nonsingular.

3.3. Comparison between WZ factorization (or Z-matrix) and WH factorization (or H-matrix)

The WZ factorization is possible provided the central submatrices of the nonsingular matrix are invertible, while the WH factorization depends not only on the invertibility of those submatrices but also on the elements in the first and last rows of each submatrix being nonzero. If we assume the entries h_{i,j} are analogous to z_{i,j}, then a Z-matrix implies an hourglass matrix provided that the computed entries z^{(k−1)}_{i,j} are strictly nonzero, for k = 1, 2, …, ⌊(n−1)/2⌋. However, the entries of a Z-matrix are not required to be nonzero, and it is then obvious that an hourglass matrix ceases to be one as soon as any of its strictly nonzero elements is replaced by zero. In general, every H-matrix is a Z-matrix, but the converse is not true, as depicted in Figure 2.

Figure 2.
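Propositions 3.7 and 3.8 are easy to check numerically. A sketch (the helper names are our own; β = |(n+1) mod 2 − 1|, and the entry counts are cross-checked against a direct walk of the nonzero pattern of Definition 3.1):

```python
import numpy as np

def hourglass_counts(n):
    """Entry counts of an order-n hourglass matrix (Proposition 3.7).
    beta = |(n+1) mod 2 - 1| is 1 for odd n and 0 for even n."""
    beta = abs((n + 1) % 2 - 1)
    return (n * n + 2 * n - beta) // 2, (n * n - 2 * n + beta) // 2

def hourglass_counts_direct(n):
    """Same counts obtained by walking the nonzero pattern of
    Definition 3.1 (1-indexed rows); an independent cross-check."""
    nz = 0
    for i in range(1, n + 1):
        if i <= (n + 1) // 2:
            nz += (n + 1 - i) - i + 1      # row i spans columns i .. n+1-i
        else:
            nz += i - (n + 1 - i) + 1      # row i spans columns n+1-i .. i
    return nz, n * n - nz

def hourglass_det(H):
    """det(H) as the product of the filanz-submatrix determinants, times
    the epicenter element when n is odd (Proposition 3.8); 0-indexed."""
    n = H.shape[0]
    d = 1.0
    for i in range(n // 2):
        j = n - 1 - i
        d *= H[i, i] * H[j, j] - H[i, j] * H[j, i]
    if n % 2:
        d *= H[n // 2, n // 2]
    return d
```

For instance, for n = 3 the counts are (7, 2), and the determinant of a 3×3 hourglass matrix reduces to the epicenter element times the determinant of its single filanz submatrix.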
H-matrix as a subset of Z-matrix.

The WZ factorization exists for every nonsingular matrix, often with pivoting, while the WH factorization may fail to exist even if the matrix is nonsingular. Unlike the factorization into a Z-matrix, the factorization of an hourglass matrix from a nonsingular matrix need not start from a symmetric positive definite or diagonally dominant matrix, but it can never start from a tridiagonal matrix. WZ factorization may work with large sparse matrices, whereas WH factorization will not. In the H-system, each block has a specific number of zero and nonzero entries, unlike in the Z-system.

3.3.1. Unique properties of hourglass matrix

The rows (and columns) of an H-matrix are linearly independent, which makes the matrix nonsingular. The transpose of an hourglass matrix does not retain the shape of the matrix but rather forms a bow-tie (butterfly) matrix. The inverse and the n-th root of an hourglass matrix are again hourglass matrices. The minimum order of an hourglass matrix is 3, and the matrix cannot be symmetric. Regardless of the order of the hourglass matrix, its total number of zero entries is even. Moreover, the hourglass matrix has a limiting matrix density of 0.5, since

lim_{n→∞} ( n² + 2n − |(n+1) mod 2 − 1| ) / (2n²) = 1/2.

3.3.2. Application of WH factorization and hourglass matrix

WH factorization has greater potential for application than WZ factorization. First, though with little evidence so far, it has been proposed that the linearly independent columns of an hourglass matrix, forming the basis vectors of a lattice, make it suitable for lattice-based cryptography, especially the GGH (Goldreich-Goldwasser-Halevi) encryption scheme; see [47, 46]. The use of the hourglass matrix is expected to reduce the size of the bases. This reduction would allow the GGH scheme to be implemented in higher lattice dimensions while remaining efficient and practical, and the generation of an hourglass matrix can be executed in polynomial time.
Lastly, unlike the Z-matrix, the hourglass matrix can be represented as a weighted mixed graph called the mixed hourglass graph [48]. Recall that a simple graph G = (V, E) is an ordered pair consisting of a set of vertices V = {v_1, v_2, …, v_n} and a set of undirected edges E = {e_1, e_2, …, e_m}, with no loops or multiple edges permitted [49, 50]. A mixed graph G = (V, E, A) is an ordered triple consisting of a set of vertices V = {v_1, v_2, …, v_n}, a set of undirected edges E, and a set of directed arcs A [51]. Now, an hourglass graph (butterfly graph) is a planar undirected graph formed by at least two triangles intersecting in a single vertex, for instance the 5-vertex graph of two K_3's or the friendship graph F_2; see for example [52, 53, 54]. A mixed hourglass graph, however, is a mixed complete graph named after its mixed adjacency matrix, which is obtained from an hourglass matrix [55].

Definition 3.4 ([48]). A mixed hourglass graph 𝒢 = (V, E, A) is an ordered triple consisting of a set of vertices V = {v_1, v_2, …, v_n}, a set of undirected edges E, and a set of directed arcs A.

Definition 3.5 ([55]). The mixed hourglass-adjacency matrix M(𝒢) of a mixed hourglass graph 𝒢 is the n×n (n ≥ 3) matrix M(𝒢) = (h_{i,j}) defined by

M(𝒢)_{i,j} = 1 if v_i v_j is an edge; −1 if (v_i, v_j) is an arc; 0 otherwise. (27)

Lemma 3.10 ([55]). For every mixed hourglass graph 𝒢 of order n, the number of undirected edges m is

m = (n − β)/2, (28)

where β = |(n+1) mod 2 − 1|.

Theorem 3.11 ([55]). Let E_M(𝒢) be the mixed energy of a mixed hourglass graph 𝒢 of order n and let λ_i(𝒢) = {λ_1, λ_2, …, λ_n} be the mixed eigenvalues of its mixed hourglass-adjacency matrix M(𝒢). Then

E_M(𝒢) = ∑_{i=1}^{n} |λ_i(𝒢)| = n − β, (29)

where β = |(n+1) mod 2 − 1|.

3.4. Numerical example of WZ and WH factorization

Given the dense nonsingular square matrix B of order 6 below, apply both the WZ and the WH factorization algorithms to the matrix.
B =
⎡  2    0    2    4    3   −1 ⎤
⎢  5   10   −7    8   11    4 ⎥
⎢  0  −12    9    6   18    1 ⎥
⎢ −13  12    8  −20   14   17 ⎥
⎢  3    1    1   −1    1    4 ⎥
⎣ 10    6    9  −13   10   14 ⎦.

3.5. WZ factorization of matrix B

Step 1: Let b^{(0)}_{i,j} = z^{(0)}_{i,j}, for i, j = 1, …, 6. We now compute the sets of 2×2 systems of linear equations

z^{(k−1)}_{k,k} w^{(k)}_{i,k} + z^{(k−1)}_{n−k+1,k} w^{(k)}_{i,n−k+1} = −z^{(k−1)}_{i,k},
z^{(k−1)}_{k,n−k+1} w^{(k)}_{i,k} + z^{(k−1)}_{n−k+1,n−k+1} w^{(k)}_{i,n−k+1} = −z^{(k−1)}_{i,n−k+1},

for k = 1, …, ⌊(n−1)/2⌋ = 2. For k = 1 we have

z^{(0)}_{1,1} w^{(1)}_{i,1} + z^{(0)}_{6,1} w^{(1)}_{i,6} = −z^{(0)}_{i,1},
z^{(0)}_{1,6} w^{(1)}_{i,1} + z^{(0)}_{6,6} w^{(1)}_{i,6} = −z^{(0)}_{i,6}.

For i = 2: 2w^{(1)}_{2,1} + 10w^{(1)}_{2,6} = −5 and −w^{(1)}_{2,1} + 14w^{(1)}_{2,6} = −4, so w^{(1)}_{2,1} = −15/19 and w^{(1)}_{2,6} = −13/38.

For i = 3: 2w^{(1)}_{3,1} + 10w^{(1)}_{3,6} = 0 and −w^{(1)}_{3,1} + 14w^{(1)}_{3,6} = −1, so w^{(1)}_{3,1} = 5/19 and w^{(1)}_{3,6} = −1/19.

For i = 4: 2w^{(1)}_{4,1} + 10w^{(1)}_{4,6} = 13 and −w^{(1)}_{4,1} + 14w^{(1)}_{4,6} = −17, so w^{(1)}_{4,1} = 176/19 and w^{(1)}_{4,6} = −21/38.

For i = 5: 2w^{(1)}_{5,1} + 10w^{(1)}_{5,6} = −3 and −w^{(1)}_{5,1} + 14w^{(1)}_{5,6} = −4, so w^{(1)}_{5,1} = −1/19 and w^{(1)}_{5,6} = −11/38.

Therefore, we write the values w^{(1)}_{i,1} and w^{(1)}_{i,6} in matrix form as

W^{(1)} =
⎡ 1       0  0  0  0   0     ⎤
⎢ −15/19  1  0  0  0  −13/38 ⎥
⎢ 5/19    0  1  0  0  −1/19  ⎥
⎢ 176/19  0  0  1  0  −21/38 ⎥
⎢ −1/19   0  0  0  1  −11/38 ⎥
⎣ 0       0  0  0  0   1     ⎦.

Step 2: We update z^{(0)}_{i,j} to z^{(1)}_{i,j} by computing the entries as

z^{(k)}_{i,j} = z^{(k−1)}_{i,j} + w^{(k)}_{i,k} z^{(k−1)}_{k,j} + w^{(k)}_{i,n−k+1} z^{(k−1)}_{n−k+1,j}, that is, z^{(1)}_{i,j} = z^{(0)}_{i,j} + w^{(1)}_{i,1} z^{(0)}_{1,j} + w^{(1)}_{i,6} z^{(0)}_{6,j}.
When $i = 2$ and $j = 2, 3, 4, 5$, we have
\begin{align*}
z^{(1)}_{2,2} &= z^{(0)}_{2,2} + w^{(1)}_{2,1} z^{(0)}_{1,2} + w^{(1)}_{2,6} z^{(0)}_{6,2} = 10 + \left(-\tfrac{15}{19}\right)(0) + \left(-\tfrac{13}{38}\right)(6) = \tfrac{151}{19},\\
z^{(1)}_{2,3} &= z^{(0)}_{2,3} + w^{(1)}_{2,1} z^{(0)}_{1,3} + w^{(1)}_{2,6} z^{(0)}_{6,3} = -7 + \left(-\tfrac{15}{19}\right)(2) + \left(-\tfrac{13}{38}\right)(9) = -\tfrac{443}{38},\\
z^{(1)}_{2,4} &= z^{(0)}_{2,4} + w^{(1)}_{2,1} z^{(0)}_{1,4} + w^{(1)}_{2,6} z^{(0)}_{6,4} = 8 + \left(-\tfrac{15}{19}\right)(4) + \left(-\tfrac{13}{38}\right)(-13) = \tfrac{353}{38},\\
z^{(1)}_{2,5} &= z^{(0)}_{2,5} + w^{(1)}_{2,1} z^{(0)}_{1,5} + w^{(1)}_{2,6} z^{(0)}_{6,5} = 11 + \left(-\tfrac{15}{19}\right)(3) + \left(-\tfrac{13}{38}\right)(10) = \tfrac{99}{19}.
\end{align*}
When $i = 3$ and $j = 2, 3, 4, 5$, then
\begin{align*}
z^{(1)}_{3,2} &= z^{(0)}_{3,2} + w^{(1)}_{3,1} z^{(0)}_{1,2} + w^{(1)}_{3,6} z^{(0)}_{6,2} = -12 + \left(\tfrac{5}{19}\right)(0) + \left(-\tfrac{1}{19}\right)(6) = -\tfrac{234}{19},\\
z^{(1)}_{3,3} &= z^{(0)}_{3,3} + w^{(1)}_{3,1} z^{(0)}_{1,3} + w^{(1)}_{3,6} z^{(0)}_{6,3} = 9 + \left(\tfrac{5}{19}\right)(2) + \left(-\tfrac{1}{19}\right)(9) = \tfrac{172}{19},\\
z^{(1)}_{3,4} &= z^{(0)}_{3,4} + w^{(1)}_{3,1} z^{(0)}_{1,4} + w^{(1)}_{3,6} z^{(0)}_{6,4} = 6 + \left(\tfrac{5}{19}\right)(4) + \left(-\tfrac{1}{19}\right)(-13) = \tfrac{147}{19},\\
z^{(1)}_{3,5} &= z^{(0)}_{3,5} + w^{(1)}_{3,1} z^{(0)}_{1,5} + w^{(1)}_{3,6} z^{(0)}_{6,5} = 18 + \left(\tfrac{5}{19}\right)(3) + \left(-\tfrac{1}{19}\right)(10) = \tfrac{347}{19}.
\end{align*}
When $i = 4$ and $j = 2, 3, 4, 5$, we have
\begin{align*}
z^{(1)}_{4,2} &= z^{(0)}_{4,2} + w^{(1)}_{4,1} z^{(0)}_{1,2} + w^{(1)}_{4,6} z^{(0)}_{6,2} = 12 + \left(\tfrac{176}{19}\right)(0) + \left(-\tfrac{21}{38}\right)(6) = \tfrac{165}{19},\\
z^{(1)}_{4,3} &= z^{(0)}_{4,3} + w^{(1)}_{4,1} z^{(0)}_{1,3} + w^{(1)}_{4,6} z^{(0)}_{6,3} = 8 + \left(\tfrac{176}{19}\right)(2) + \left(-\tfrac{21}{38}\right)(9) = \tfrac{819}{38},\\
z^{(1)}_{4,4} &= z^{(0)}_{4,4} + w^{(1)}_{4,1} z^{(0)}_{1,4} + w^{(1)}_{4,6} z^{(0)}_{6,4} = -20 + \left(\tfrac{176}{19}\right)(4) + \left(-\tfrac{21}{38}\right)(-13) = \tfrac{921}{38},\\
z^{(1)}_{4,5} &= z^{(0)}_{4,5} + w^{(1)}_{4,1} z^{(0)}_{1,5} + w^{(1)}_{4,6} z^{(0)}_{6,5} = 14 + \left(\tfrac{176}{19}\right)(3) + \left(-\tfrac{21}{38}\right)(10) = \tfrac{689}{19}.
\end{align*}
When $i = 5$ and $j = 2, 3, 4, 5$, then
\begin{align*}
z^{(1)}_{5,2} &= z^{(0)}_{5,2} + w^{(1)}_{5,1} z^{(0)}_{1,2} + w^{(1)}_{5,6} z^{(0)}_{6,2} = 1 + \left(-\tfrac{1}{19}\right)(0) + \left(-\tfrac{11}{38}\right)(6) = -\tfrac{14}{19},\\
z^{(1)}_{5,3} &= z^{(0)}_{5,3} + w^{(1)}_{5,1} z^{(0)}_{1,3} + w^{(1)}_{5,6} z^{(0)}_{6,3} = 1 + \left(-\tfrac{1}{19}\right)(2) + \left(-\tfrac{11}{38}\right)(9) = -\tfrac{65}{38},\\
z^{(1)}_{5,4} &= z^{(0)}_{5,4} + w^{(1)}_{5,1} z^{(0)}_{1,4} + w^{(1)}_{5,6} z^{(0)}_{6,4} = -1 + \left(-\tfrac{1}{19}\right)(4) + \left(-\tfrac{11}{38}\right)(-13) = \tfrac{97}{38},\\
z^{(1)}_{5,5} &= z^{(0)}_{5,5} + w^{(1)}_{5,1} z^{(0)}_{1,5} + w^{(1)}_{5,6} z^{(0)}_{6,5} = 1 + \left(-\tfrac{1}{19}\right)(3) + \left(-\tfrac{11}{38}\right)(10) = -\tfrac{39}{19}.
\end{align*}
Thus,
\[
Z^{(1)} = \begin{pmatrix}
2 & 0 & 2 & 4 & 3 & -1\\
0 & \frac{151}{19} & -\frac{443}{38} & \frac{353}{38} & \frac{99}{19} & 0\\
0 & -\frac{234}{19} & \frac{172}{19} & \frac{147}{19} & \frac{347}{19} & 0\\
0 & \frac{165}{19} & \frac{819}{38} & \frac{921}{38} & \frac{689}{19} & 0\\
0 & -\frac{14}{19} & -\frac{65}{38} & \frac{97}{38} & -\frac{39}{19} & 0\\
10 & 6 & 9 & -13 & 10 & 14
\end{pmatrix}.
\]
Step 3: Now, we can compute the next set of $2 \times 2$ systems of linear equations from the entries $z^{(1)}_{i,j}$. Let $k = 2$; then
\[
\begin{cases}
z^{(1)}_{2,2} w^{(2)}_{i,2} + z^{(1)}_{n-1,2} w^{(2)}_{i,n-1} = -z^{(1)}_{i,2},\\
z^{(1)}_{2,n-1} w^{(2)}_{i,2} + z^{(1)}_{n-1,n-1} w^{(2)}_{i,n-1} = -z^{(1)}_{i,n-1}.
\end{cases}
\]
Whenever $i = 3$, then
\[
\begin{cases}
z^{(1)}_{2,2} w^{(2)}_{3,2} + z^{(1)}_{5,2} w^{(2)}_{3,5} = -z^{(1)}_{3,2}\\
z^{(1)}_{2,5} w^{(2)}_{3,2} + z^{(1)}_{5,5} w^{(2)}_{3,5} = -z^{(1)}_{3,5}
\end{cases}
\Rightarrow
\begin{cases}
\frac{151}{19} w^{(2)}_{3,2} - \frac{14}{19} w^{(2)}_{3,5} = \frac{234}{19}\\
\frac{99}{19} w^{(2)}_{3,2} - \frac{39}{19} w^{(2)}_{3,5} = -\frac{347}{19}
\end{cases}
\Rightarrow
\begin{cases}
w^{(2)}_{3,2} = \frac{13984}{4503}\\
w^{(2)}_{3,5} = \frac{75563}{4503}
\end{cases}
\]
Whenever $i = 4$, then
\[
\begin{cases}
z^{(1)}_{2,2} w^{(2)}_{4,2} + z^{(1)}_{5,2} w^{(2)}_{4,5} = -z^{(1)}_{4,2}\\
z^{(1)}_{2,5} w^{(2)}_{4,2} + z^{(1)}_{5,5} w^{(2)}_{4,5} = -z^{(1)}_{4,5}
\end{cases}
\Rightarrow
\begin{cases}
\frac{151}{19} w^{(2)}_{4,2} - \frac{14}{19} w^{(2)}_{4,5} = -\frac{165}{19}\\
\frac{99}{19} w^{(2)}_{4,2} - \frac{39}{19} w^{(2)}_{4,5} = -\frac{689}{19}
\end{cases}
\Rightarrow
\begin{cases}
w^{(2)}_{4,2} = \frac{3211}{4503}\\
w^{(2)}_{4,5} = \frac{87704}{4503}
\end{cases}
\]
Thus,
\[
W^{(2)} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & \frac{13984}{4503} & 1 & 0 & \frac{75563}{4503} & 0\\
0 & \frac{3211}{4503} & 0 & 1 & \frac{87704}{4503} & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\]
Step 4: We then proceed to update $z^{(1)}_{i,j}$ to $z^{(2)}_{i,j}$ by computing the entries as
\[
z^{(k)}_{i,j} = z^{(k-1)}_{i,j} + w^{(k)}_{i,k} z^{(k-1)}_{k,j} + w^{(k)}_{i,n-k+1} z^{(k-1)}_{n-k+1,j}
\;\Rightarrow\;
z^{(2)}_{i,j} = z^{(1)}_{i,j} + w^{(2)}_{i,2} z^{(1)}_{2,j} + w^{(2)}_{i,5} z^{(1)}_{5,j}.
\]
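Each such update is a rank-two correction of the interior rows, and it simultaneously annihilates the two outer columns of those rows. A self-contained Python sketch of the step-1 update (variable names are ours, for illustration only):

```python
from fractions import Fraction

n, k = 6, 1
# Z^(0) = B as exact rationals
Z = [[Fraction(x) for x in row] for row in
     [[2, 0, 2, 4, 3, -1],
      [5, 10, -7, 8, 11, 4],
      [0, -12, 9, 6, 18, 1],
      [-13, 12, 8, -20, 14, 17],
      [3, 1, 1, -1, 1, 4],
      [10, 6, 9, -13, 10, 14]]]
# Step-1 multipliers (w^(1)_{i,1}, w^(1)_{i,6}) computed in the text
w = {2: (Fraction(-15, 19), Fraction(-13, 38)),
     3: (Fraction(5, 19),   Fraction(-1, 19)),
     4: (Fraction(176, 19), Fraction(-21, 38)),
     5: (Fraction(-1, 19),  Fraction(-11, 38))}

# z^(1)_{i,j} = z^(0)_{i,j} + w^(1)_{i,1} z^(0)_{1,j} + w^(1)_{i,6} z^(0)_{6,j}
for i in (2, 3, 4, 5):             # 1-based interior rows
    wi1, wi6 = w[i]
    for j in range(n):
        Z[i-1][j] += wi1 * Z[0][j] + wi6 * Z[n-1][j]

print(Z[1][1], Z[1][0])   # 151/19 0
```

After the loop, `Z` holds exactly the matrix $Z^{(1)}$ of the text, with zeros in columns 1 and 6 of the interior rows.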
When $i = 3$ and $j = 3, 4$, we have
\begin{align*}
z^{(2)}_{3,3} &= z^{(1)}_{3,3} + w^{(2)}_{3,2} z^{(1)}_{2,3} + w^{(2)}_{3,5} z^{(1)}_{5,3} = \tfrac{172}{19} + \left(\tfrac{13984}{4503}\right)\left(-\tfrac{443}{38}\right) + \left(\tfrac{75563}{4503}\right)\left(-\tfrac{65}{38}\right) = -\tfrac{9557475}{171114},\\
z^{(2)}_{3,4} &= z^{(1)}_{3,4} + w^{(2)}_{3,2} z^{(1)}_{2,4} + w^{(2)}_{3,5} z^{(1)}_{5,4} = \tfrac{147}{19} + \left(\tfrac{13984}{4503}\right)\left(\tfrac{353}{38}\right) + \left(\tfrac{75563}{4503}\right)\left(\tfrac{97}{38}\right) = \tfrac{13589845}{171114}.
\end{align*}
When $i = 4$ and $j = 3, 4$, we have
\begin{align*}
z^{(2)}_{4,3} &= z^{(1)}_{4,3} + w^{(2)}_{4,2} z^{(1)}_{2,3} + w^{(2)}_{4,5} z^{(1)}_{5,3} = \tfrac{819}{38} + \left(\tfrac{3211}{4503}\right)\left(-\tfrac{443}{38}\right) + \left(\tfrac{87704}{4503}\right)\left(-\tfrac{65}{38}\right) = -\tfrac{3435276}{171114},\\
z^{(2)}_{4,4} &= z^{(1)}_{4,4} + w^{(2)}_{4,2} z^{(1)}_{2,4} + w^{(2)}_{4,5} z^{(1)}_{5,4} = \tfrac{921}{38} + \left(\tfrac{3211}{4503}\right)\left(\tfrac{353}{38}\right) + \left(\tfrac{87704}{4503}\right)\left(\tfrac{97}{38}\right) = \tfrac{13788034}{171114}.
\end{align*}
Thus,
\[
Z = \begin{pmatrix}
2 & 0 & 2 & 4 & 3 & -1\\
0 & \frac{151}{19} & -\frac{443}{38} & \frac{353}{38} & \frac{99}{19} & 0\\
0 & 0 & -\frac{9557475}{171114} & \frac{13589845}{171114} & 0 & 0\\
0 & 0 & -\frac{3435276}{171114} & \frac{13788034}{171114} & 0 & 0\\
0 & -\frac{14}{19} & -\frac{65}{38} & \frac{97}{38} & -\frac{39}{19} & 0\\
10 & 6 & 9 & -13 & 10 & 14
\end{pmatrix},
\]
and therefore $B = \left(W^{(2)} \cdot W^{(1)}\right)^{-1} \cdot Z$.

3.6. WH factorization of matrix B

Step 1: We check the first and last rows of matrix $B$ before the initial update:
\[
b^{(0)}_{1,j} = h^{(0)}_{1,j}: \; 2, \, 0, \, 2, \, 4, \, 3, \, -1; \qquad
b^{(0)}_{6,j} = h^{(0)}_{6,j}: \; 10, \, 6, \, 9, \, -13, \, 10, \, 14.
\]
Since $h^{(0)}_{1,2} = 0$, we interchange the first row with any other row except the last row, so that the first and last rows of the matrix have no zero entries. In this case we interchange the first row with the fifth row, giving
\[
H^{(0)} = \begin{pmatrix}
3 & 1 & 1 & -1 & 1 & 4\\
5 & 10 & -7 & 8 & 11 & 4\\
0 & -12 & 9 & 6 & 18 & 1\\
-13 & 12 & 8 & -20 & 14 & 17\\
2 & 0 & 2 & 4 & 3 & -1\\
10 & 6 & 9 & -13 & 10 & 14
\end{pmatrix},
\]
with the corresponding permutation matrix
\[
P^{(1)} = \begin{pmatrix}
0 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
We begin to compute the set of $2 \times 2$ systems of linear equations
\[
\begin{cases}
h^{(k-1)}_{k,k} w^{(k)}_{i,k} + h^{(k-1)}_{n-k+1,k} w^{(k)}_{i,n-k+1} = -h^{(k-1)}_{i,k},\\
h^{(k-1)}_{k,n-k+1} w^{(k)}_{i,k} + h^{(k-1)}_{n-k+1,n-k+1} w^{(k)}_{i,n-k+1} = -h^{(k-1)}_{i,n-k+1},
\end{cases}
\]
for $k = 1, \dots, \lfloor \frac{n-1}{2} \rfloor = 2$.
Now let $k = 1$; then we have
\[
\begin{cases}
h^{(0)}_{1,1} w^{(1)}_{i,1} + h^{(0)}_{n,1} w^{(1)}_{i,n} = -h^{(0)}_{i,1},\\
h^{(0)}_{1,n} w^{(1)}_{i,1} + h^{(0)}_{n,n} w^{(1)}_{i,n} = -h^{(0)}_{i,n}.
\end{cases}
\]
Whenever $i = 2$, then
\[
\begin{cases}
h^{(0)}_{1,1} w^{(1)}_{2,1} + h^{(0)}_{6,1} w^{(1)}_{2,6} = -h^{(0)}_{2,1}\\
h^{(0)}_{1,6} w^{(1)}_{2,1} + h^{(0)}_{6,6} w^{(1)}_{2,6} = -h^{(0)}_{2,6}
\end{cases}
\Rightarrow
\begin{cases}
3 w^{(1)}_{2,1} + 10 w^{(1)}_{2,6} = -5\\
4 w^{(1)}_{2,1} + 14 w^{(1)}_{2,6} = -4
\end{cases}
\Rightarrow
\begin{cases}
w^{(1)}_{2,1} = -15\\
w^{(1)}_{2,6} = 4
\end{cases}
\]
Whenever $i = 3$, then
\[
\begin{cases}
h^{(0)}_{1,1} w^{(1)}_{3,1} + h^{(0)}_{6,1} w^{(1)}_{3,6} = -h^{(0)}_{3,1}\\
h^{(0)}_{1,6} w^{(1)}_{3,1} + h^{(0)}_{6,6} w^{(1)}_{3,6} = -h^{(0)}_{3,6}
\end{cases}
\Rightarrow
\begin{cases}
3 w^{(1)}_{3,1} + 10 w^{(1)}_{3,6} = 0\\
4 w^{(1)}_{3,1} + 14 w^{(1)}_{3,6} = -1
\end{cases}
\Rightarrow
\begin{cases}
w^{(1)}_{3,1} = 5\\
w^{(1)}_{3,6} = -\frac{3}{2}
\end{cases}
\]
Whenever $i = 4$, then
\[
\begin{cases}
h^{(0)}_{1,1} w^{(1)}_{4,1} + h^{(0)}_{6,1} w^{(1)}_{4,6} = -h^{(0)}_{4,1}\\
h^{(0)}_{1,6} w^{(1)}_{4,1} + h^{(0)}_{6,6} w^{(1)}_{4,6} = -h^{(0)}_{4,6}
\end{cases}
\Rightarrow
\begin{cases}
3 w^{(1)}_{4,1} + 10 w^{(1)}_{4,6} = 13\\
4 w^{(1)}_{4,1} + 14 w^{(1)}_{4,6} = -17
\end{cases}
\Rightarrow
\begin{cases}
w^{(1)}_{4,1} = 176\\
w^{(1)}_{4,6} = -\frac{103}{2}
\end{cases}
\]
Whenever $i = 5$, then
\[
\begin{cases}
h^{(0)}_{1,1} w^{(1)}_{5,1} + h^{(0)}_{6,1} w^{(1)}_{5,6} = -h^{(0)}_{5,1}\\
h^{(0)}_{1,6} w^{(1)}_{5,1} + h^{(0)}_{6,6} w^{(1)}_{5,6} = -h^{(0)}_{5,6}
\end{cases}
\Rightarrow
\begin{cases}
3 w^{(1)}_{5,1} + 10 w^{(1)}_{5,6} = -2\\
4 w^{(1)}_{5,1} + 14 w^{(1)}_{5,6} = 1
\end{cases}
\Rightarrow
\begin{cases}
w^{(1)}_{5,1} = -19\\
w^{(1)}_{5,6} = \frac{11}{2}
\end{cases}
\]
Therefore, we write the values of $w^{(1)}_{i,1}$ and $w^{(1)}_{i,n}$ in matrix form as
\[
W^{(1)} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
-15 & 1 & 0 & 0 & 0 & 4\\
5 & 0 & 1 & 0 & 0 & -\frac{3}{2}\\
176 & 0 & 0 & 1 & 0 & -\frac{103}{2}\\
-19 & 0 & 0 & 0 & 1 & \frac{11}{2}\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\]
Step 2: We update $h^{(0)}_{i,j}$ to $h^{(1)}_{i,j}$ by computing its entries as
\[
h^{(k)}_{i,j} = h^{(k-1)}_{i,j} + w^{(k)}_{i,k} h^{(k-1)}_{k,j} + w^{(k)}_{i,n-k+1} h^{(k-1)}_{n-k+1,j}
\;\Rightarrow\;
h^{(1)}_{i,j} = h^{(0)}_{i,j} + w^{(1)}_{i,1} h^{(0)}_{1,j} + w^{(1)}_{i,6} h^{(0)}_{6,j}.
\]
When $i = 2$ and $j = 2, 3, 4, 5$, we have
\begin{align*}
h^{(1)}_{2,2} &= h^{(0)}_{2,2} + w^{(1)}_{2,1} h^{(0)}_{1,2} + w^{(1)}_{2,6} h^{(0)}_{6,2} = 10 + (-15)(1) + (4)(6) = 19,\\
h^{(1)}_{2,3} &= h^{(0)}_{2,3} + w^{(1)}_{2,1} h^{(0)}_{1,3} + w^{(1)}_{2,6} h^{(0)}_{6,3} = -7 + (-15)(1) + (4)(9) = 14,\\
h^{(1)}_{2,4} &= h^{(0)}_{2,4} + w^{(1)}_{2,1} h^{(0)}_{1,4} + w^{(1)}_{2,6} h^{(0)}_{6,4} = 8 + (-15)(-1) + (4)(-13) = -29,\\
h^{(1)}_{2,5} &= h^{(0)}_{2,5} + w^{(1)}_{2,1} h^{(0)}_{1,5} + w^{(1)}_{2,6} h^{(0)}_{6,5} = 11 + (-15)(1) + (4)(10) = 36.
\end{align*}
When $i = 3$ and $j = 2, 3, 4, 5$, then
\begin{align*}
h^{(1)}_{3,2} &= h^{(0)}_{3,2} + w^{(1)}_{3,1} h^{(0)}_{1,2} + w^{(1)}_{3,6} h^{(0)}_{6,2} = -12 + (5)(1) + \left(-\tfrac{3}{2}\right)(6) = -16,\\
h^{(1)}_{3,3} &= h^{(0)}_{3,3} + w^{(1)}_{3,1} h^{(0)}_{1,3} + w^{(1)}_{3,6} h^{(0)}_{6,3} = 9 + (5)(1) + \left(-\tfrac{3}{2}\right)(9) = \tfrac{1}{2},\\
h^{(1)}_{3,4} &= h^{(0)}_{3,4} + w^{(1)}_{3,1} h^{(0)}_{1,4} + w^{(1)}_{3,6} h^{(0)}_{6,4} = 6 + (5)(-1) + \left(-\tfrac{3}{2}\right)(-13) = \tfrac{41}{2},\\
h^{(1)}_{3,5} &= h^{(0)}_{3,5} + w^{(1)}_{3,1} h^{(0)}_{1,5} + w^{(1)}_{3,6} h^{(0)}_{6,5} = 18 + (5)(1) + \left(-\tfrac{3}{2}\right)(10) = 8.
\end{align*}
When $i = 4$ and $j = 2, 3, 4, 5$, we have
\begin{align*}
h^{(1)}_{4,2} &= h^{(0)}_{4,2} + w^{(1)}_{4,1} h^{(0)}_{1,2} + w^{(1)}_{4,6} h^{(0)}_{6,2} = 12 + (176)(1) + \left(-\tfrac{103}{2}\right)(6) = -121,\\
h^{(1)}_{4,3} &= h^{(0)}_{4,3} + w^{(1)}_{4,1} h^{(0)}_{1,3} + w^{(1)}_{4,6} h^{(0)}_{6,3} = 8 + (176)(1) + \left(-\tfrac{103}{2}\right)(9) = -\tfrac{559}{2},\\
h^{(1)}_{4,4} &= h^{(0)}_{4,4} + w^{(1)}_{4,1} h^{(0)}_{1,4} + w^{(1)}_{4,6} h^{(0)}_{6,4} = -20 + (176)(-1) + \left(-\tfrac{103}{2}\right)(-13) = \tfrac{947}{2},\\
h^{(1)}_{4,5} &= h^{(0)}_{4,5} + w^{(1)}_{4,1} h^{(0)}_{1,5} + w^{(1)}_{4,6} h^{(0)}_{6,5} = 14 + (176)(1) + \left(-\tfrac{103}{2}\right)(10) = -325.
\end{align*}
When $i = 5$ and $j = 2, 3, 4, 5$, then
\begin{align*}
h^{(1)}_{5,2} &= h^{(0)}_{5,2} + w^{(1)}_{5,1} h^{(0)}_{1,2} + w^{(1)}_{5,6} h^{(0)}_{6,2} = 0 + (-19)(1) + \left(\tfrac{11}{2}\right)(6) = 14,\\
h^{(1)}_{5,3} &= h^{(0)}_{5,3} + w^{(1)}_{5,1} h^{(0)}_{1,3} + w^{(1)}_{5,6} h^{(0)}_{6,3} = 2 + (-19)(1) + \left(\tfrac{11}{2}\right)(9) = \tfrac{65}{2},\\
h^{(1)}_{5,4} &= h^{(0)}_{5,4} + w^{(1)}_{5,1} h^{(0)}_{1,4} + w^{(1)}_{5,6} h^{(0)}_{6,4} = 4 + (-19)(-1) + \left(\tfrac{11}{2}\right)(-13) = -\tfrac{97}{2},\\
h^{(1)}_{5,5} &= h^{(0)}_{5,5} + w^{(1)}_{5,1} h^{(0)}_{1,5} + w^{(1)}_{5,6} h^{(0)}_{6,5} = 3 + (-19)(1) + \left(\tfrac{11}{2}\right)(10) = 39.
\end{align*}
In $H^{(1)}$ the entries $h^{(1)}_{2,j}$ and $h^{(1)}_{5,j}$ are nonzero for $j = 2, 3, 4, 5$ (namely $h^{(1)}_{2,2} = 19$, $h^{(1)}_{2,3} = 14$, $h^{(1)}_{2,4} = -29$, $h^{(1)}_{2,5} = 36$, $h^{(1)}_{5,2} = 14$, $h^{(1)}_{5,3} = \frac{65}{2}$, $h^{(1)}_{5,4} = -\frac{97}{2}$, $h^{(1)}_{5,5} = 39$), so the factorization proceeds. Otherwise a suitable row interchange must be applied to $H^{(0)}$ and the factorization restarted; if no such interchange exists, the factorization breaks down.
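The interchange rule before Step 1 and the nonzero test after each update can be sketched in code. The following is a simplified illustration; the function names and the "first zero-free row" choice are ours, since the paper does not prescribe a particular pivot-search strategy:

```python
def initial_interchange(B):
    """Pick a row to swap into position 1 so that the first and the last
    rows of H^(0) contain no zero entry, as required before Step 1."""
    if 0 in B[-1]:
        raise ValueError("WH breakdown: last row contains a zero entry")
    if 0 not in B[0]:
        return 0                      # no interchange needed
    for i in range(1, len(B) - 1):
        if 0 not in B[i]:
            return i                  # swap row 1 with row i+1 (1-based)
    raise ValueError("WH breakdown: no zero-free row available")

def pivot_rows_ok(H, k):
    """After update k, rows k+1 and n-k (1-based) must be zero-free in the
    interior columns for the next set of 2x2 systems to be formed."""
    n = len(H)
    cols = range(k, n - k)            # 0-based interior columns
    return all(H[k][j] != 0 for j in cols) and \
           all(H[n - 1 - k][j] != 0 for j in cols)

B = [[2, 0, 2, 4, 3, -1],
     [5, 10, -7, 8, 11, 4],
     [0, -12, 9, 6, 18, 1],
     [-13, 12, 8, -20, 14, 17],
     [3, 1, 1, -1, 1, 4],
     [10, 6, 9, -13, 10, 14]]
# Row 2 (0-based index 1) is zero-free; the text instead swaps in row 5.
# Any zero-free choice is admissible.
print(initial_interchange(B))   # 1
```

Note that this search is cheap relative to the elimination itself, so the check adds little overhead per step.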
Step 3: Now, we can compute the next set of $2 \times 2$ systems of linear equations from the entries $h^{(1)}_{i,j}$. Let $k = 2$; then
\[
\begin{cases}
h^{(1)}_{2,2} w^{(2)}_{i,2} + h^{(1)}_{n-1,2} w^{(2)}_{i,n-1} = -h^{(1)}_{i,2},\\
h^{(1)}_{2,n-1} w^{(2)}_{i,2} + h^{(1)}_{n-1,n-1} w^{(2)}_{i,n-1} = -h^{(1)}_{i,n-1}.
\end{cases}
\]
Whenever $i = 3$, then
\[
\begin{cases}
h^{(1)}_{2,2} w^{(2)}_{3,2} + h^{(1)}_{5,2} w^{(2)}_{3,5} = -h^{(1)}_{3,2}\\
h^{(1)}_{2,5} w^{(2)}_{3,2} + h^{(1)}_{5,5} w^{(2)}_{3,5} = -h^{(1)}_{3,5}
\end{cases}
\Rightarrow
\begin{cases}
19 w^{(2)}_{3,2} + 14 w^{(2)}_{3,5} = 16\\
36 w^{(2)}_{3,2} + 39 w^{(2)}_{3,5} = -8
\end{cases}
\Rightarrow
\begin{cases}
w^{(2)}_{3,2} = \frac{736}{237}\\
w^{(2)}_{3,5} = -\frac{728}{237}
\end{cases}
\]
Whenever $i = 4$, then
\[
\begin{cases}
h^{(1)}_{2,2} w^{(2)}_{4,2} + h^{(1)}_{5,2} w^{(2)}_{4,5} = -h^{(1)}_{4,2}\\
h^{(1)}_{2,5} w^{(2)}_{4,2} + h^{(1)}_{5,5} w^{(2)}_{4,5} = -h^{(1)}_{4,5}
\end{cases}
\Rightarrow
\begin{cases}
19 w^{(2)}_{4,2} + 14 w^{(2)}_{4,5} = 121\\
36 w^{(2)}_{4,2} + 39 w^{(2)}_{4,5} = 325
\end{cases}
\Rightarrow
\begin{cases}
w^{(2)}_{4,2} = \frac{169}{237}\\
w^{(2)}_{4,5} = \frac{1819}{237}
\end{cases}
\]
Thus,
\[
W^{(2)} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & \frac{736}{237} & 1 & 0 & -\frac{728}{237} & 0\\
0 & \frac{169}{237} & 0 & 1 & \frac{1819}{237} & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\]
Step 4: We then proceed to update the matrix $H^{(1)}$ to $H^{(2)}$ by computing its entries as
\[
h^{(k)}_{i,j} = h^{(k-1)}_{i,j} + w^{(k)}_{i,k} h^{(k-1)}_{k,j} + w^{(k)}_{i,n-k+1} h^{(k-1)}_{n-k+1,j}
\;\Rightarrow\;
h^{(2)}_{i,j} = h^{(1)}_{i,j} + w^{(2)}_{i,2} h^{(1)}_{2,j} + w^{(2)}_{i,5} h^{(1)}_{5,j}.
\]
When $i = 3$ and $j = 3, 4$, we have
\begin{align*}
h^{(2)}_{3,3} &= h^{(1)}_{3,3} + w^{(2)}_{3,2} h^{(1)}_{2,3} + w^{(2)}_{3,5} h^{(1)}_{5,3} = \tfrac{1}{2} + \left(\tfrac{736}{237}\right)(14) + \left(-\tfrac{728}{237}\right)\left(\tfrac{65}{2}\right) = -\tfrac{26475}{474},\\
h^{(2)}_{3,4} &= h^{(1)}_{3,4} + w^{(2)}_{3,2} h^{(1)}_{2,4} + w^{(2)}_{3,5} h^{(1)}_{5,4} = \tfrac{41}{2} + \left(\tfrac{736}{237}\right)(-29) + \left(-\tfrac{728}{237}\right)\left(-\tfrac{97}{2}\right) = \tfrac{37645}{474}.
\end{align*}
When $i = 4$ and $j = 3, 4$, we have
\begin{align*}
h^{(2)}_{4,3} &= h^{(1)}_{4,3} + w^{(2)}_{4,2} h^{(1)}_{2,3} + w^{(2)}_{4,5} h^{(1)}_{5,3} = -\tfrac{559}{2} + \left(\tfrac{169}{237}\right)(14) + \left(\tfrac{1819}{237}\right)\left(\tfrac{65}{2}\right) = -\tfrac{9516}{474},\\
h^{(2)}_{4,4} &= h^{(1)}_{4,4} + w^{(2)}_{4,2} h^{(1)}_{2,4} + w^{(2)}_{4,5} h^{(1)}_{5,4} = \tfrac{947}{2} + \left(\tfrac{169}{237}\right)(-29) + \left(\tfrac{1819}{237}\right)\left(-\tfrac{97}{2}\right) = \tfrac{38194}{474}.
\end{align*}
Thus,
\[
H = \begin{pmatrix}
3 & 1 & 1 & -1 & 1 & 4\\
0 & 19 & 14 & -29 & 36 & 0\\
0 & 0 & -\frac{26475}{474} & \frac{37645}{474} & 0 & 0\\
0 & 0 & -\frac{9516}{474} & \frac{38194}{474} & 0 & 0\\
0 & 14 & \frac{65}{2} & -\frac{97}{2} & 39 & 0\\
10 & 6 & 9 & -13 & 10 & 14
\end{pmatrix}.
\]
The factorization stops since the entries $h^{(2)}_{3,j}$ and $h^{(2)}_{4,j}$ are nonzero for $j = 3, 4$.
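Both variants reduce to repeating the $2 \times 2$ solve and the rank-two update from both ends of the matrix. A compact floating-point sketch of the plain WZ case with a reconstruction check follows; this is our illustrative code, not the paper's implementation, and in practice exact or pivoted arithmetic may be preferred:

```python
import numpy as np

def wz_factorize(B):
    """Return (W, Z) with B = W @ Z: W carries the negated multipliers and
    Z is the Z-matrix built by the two-sided elimination of the text."""
    n = B.shape[0]
    Z = B.astype(float).copy()
    W = np.eye(n)
    for k in range((n - 1) // 2):
        m = n - 1 - k                         # mirror row/column index
        A2 = np.array([[Z[k, k], Z[m, k]],
                       [Z[k, m], Z[m, m]]])   # singular A2 => breakdown
        for i in range(k + 1, m):
            wik, wim = np.linalg.solve(A2, [-Z[i, k], -Z[i, m]])
            Z[i, :] += wik * Z[k, :] + wim * Z[m, :]
            W[i, k], W[i, m] = -wik, -wim     # inverse elementary operations
    return W, Z

B = np.array([[2, 0, 2, 4, 3, -1],
              [5, 10, -7, 8, 11, 4],
              [0, -12, 9, 6, 18, 1],
              [-13, 12, 8, -20, 14, 17],
              [3, 1, 1, -1, 1, 4],
              [10, 6, 9, -13, 10, 14]], dtype=float)
W, Z = wz_factorize(B)
print(np.allclose(W @ Z, B))   # True
```

Because each elementary step touches only rows strictly between the two pivot rows, the inverse of the accumulated transformation is obtained by negating the multipliers, which is why `W` can be filled in directly.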
To recover the matrix $B$, we express $B$ as
\[
B = \left(W^{(2)} \cdot W^{(1)} \cdot P^{(1)}\right)^{-1} \cdot H.
\]

4. Investigation for further studies on WZ and WH factorization

There remain gaps to be bridged, not only between WZ factorization and WH factorization but also between the Z-matrix and the hourglass matrix. Further research could be carried out on the following:

1. If there exists a WH factorization for a nonsingular matrix $B$, then there exists a WZ factorization.
2. A WZ factorization does not necessarily imply a WH factorization.
3. $Z_{system}$ is a matrix group of degree 2 over $\mathbb{R}$ but $H_{system}$ is not.
4. If both $H_{system}$ and $H_{1,1}$ are invertible, then $\left(H_{2,2} - H_{2,1} H_{1,1}^{-1} H_{1,2}\right)$ is a lower triangular invertible matrix.
5. If there exists a WH factorization for a nonsingular matrix $B$, then the factorization is unique.
6. Let $E_M(\mathcal{G})$ be the energy of a mixed hourglass graph $\mathcal{G}$ and $A_k$ be the number of arcs in $\mathcal{G}$. Does there exist a mixed hourglass graph $\mathcal{G}$ of order $n$ ($n > 4$) satisfying $A_k = E_M(\mathcal{G})$?

Acknowledgments

This review paper is funded by the Fundamental Research Grant Scheme (FRGS), grant reference number 1/2020/STG06/USM/01/1.

References

[1] F. Bornemann, Matrix Factorization, Springer (2018).
[2] J. R. Bunch & J. E. Hopcroft, “Triangular factorization and inversion by fast matrix multiplication”, Mathematics of Computation 28 (1974) 231.
[3] K. Murota, “LU-decomposition of a matrix with entries of different kinds”, Linear Algebra and its Applications 49 (1983) 275.
[4] A. Townsend & L. N. Trefethen, “Continuous analogues of matrix factorizations”, Proceedings of the Royal Society A 471 (2015) 585.
[5] X. Wang, P. H. Jones & J. Zambreno, “A configurable architecture for sparse LU decomposition on matrices with arbitrary patterns”, ACM SIGARCH Computer Architecture News 43 (2016) 76.
[6] J. Dongarra, M. Faverge, H. Ltaief & P.
Luszczek, “Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting”, Concurrency and Computation: Practice and Experience 26 (2014) 1408.
[7] D. J. Evans & M. Hatzopoulos, “The parallel calculation of the eigenvalues of a real matrix”, Computers and Mathematics with Applications 4 (1978) 211. http://dx.doi.org/10.1016/0898-1221(78)90032-9
[8] S. Rao, “Existence and uniqueness of WZ factorization”, Parallel Comput. 23 (1997) 1129.
[9] B. Bylina & J. Bylina, “Incomplete WZ factorization as an alternative method of preconditioning for solving Markov chains”, International Conference on Parallel Processing and Applied Mathematics, Springer (2017) 99.
[10] D. Evans & M. Hatzopoulos, “A parallel linear system solver”, Int. J. Comput. Math. 7 (1979) 227.
[11] D. Evans, “The QIF singular value decomposition method”, Int. J. Comput. Math. 79 (2002) 637.
[12] P. Huang, A. MacKay & D. Teng, “A hardware/software codesign of WZ factorization to improve time to solve matrices”, Canadian Conference on Electrical and Computer Engineering, IEEE (2010) 1.
[13] B. Bylina & J. Bylina, “The parallel tiled WZ factorization algorithm for multicore architectures”, Int. J. Appl. Math. Comput. Sci. 29 (2019) 407.
[14] B. Bylina, “Solving linear systems with vectorized WZ factorization”, Annales UMCS Informatica 1 (2003) 1.
[15] G. Heinig & K. Rost, “Schur-type algorithms for the solution of Hermitian Toeplitz systems via factorization”, Springer (2005) 233.
[16] B. Bylina & J. Bylina, “Mixed precision iterative refinement techniques for the WZ factorization”, IEEE Federated Conference on Computer Science and Information Systems (2013) 425.
[17] O. Babarinsa, Z. M. S. Azfi, A. H. I. Mohd & K. Hailiza, “Optimized Cramer's rule in WZ factorization and applications”, European Journal of Pure and Applied Mathematics 13 (2020) 1035.
[18] M.
Brunetti & A. Renato, “Old and new proofs of Cramer's rule”, Appl. Math. Sci. 8 (2014) 6689.
[19] B. Bylina & J. Bylina, “The WZ factorization in MATLAB”, IEEE Federated Conference on Computer Science and Information Systems (2014) 561.
[20] D. Levin & D. Evans, “The inversion of matrices by the double-bordering algorithm on MIMD computers”, Parallel Comput. 17 (1991) 591.
[21] S. C. S. Rao & R. Kamra, “A stable parallel algorithm for diagonally dominant tridiagonal linear systems”, 22nd International Conference on High Performance Computing (2015) 95.
[22] M. Kaps & M. Schlegl, “A short proof for the existence of the WZ factorisation”, Parallel Comput. 4 (1987) 229.
[23] E. Golpar-Raboky, “A new approach for computing WZ factorization”, Appl. Appl. Math. 7 (2012) 571.
[24] P. Yalamov & D. Evans, “The WZ matrix factorisation method”, Parallel Computing 21 (1995) 1111.
[25] B. Bylina, “The inverse iteration with the WZ factorization used to the Markovian models”, Annales UMCS Informatica AI 2 (2015) 15.
[26] B. Bylina, “The block WZ factorization”, J. Comput. Appl. Math. 331 (2018) 119.
[27] B. Bylina & J. Bylina, “Influence of preconditioning and blocking on accuracy in solving Markovian models”, Int. J. Appl. Math. Comput. Sci. 19 (2009) 207.
[28] J. Bylina & B. Bylina, “Parallelizing nested loops on the Intel Xeon Phi on the example of the dense WZ factorization”, IEEE Federated Conference on Computer Science and Information Systems (2016) 655.
[29] B. Bylina & J. Bylina, “GPU-accelerated WZ factorization with the use of the cuBLAS library”, IEEE Federated Conference on Computer Science and Information Systems (2012) 509.
[30] D. Evans & G. Oksa, “Parallel solution of symmetric positive definite Toeplitz systems”, Parallel Algorithms Appl. 12 (1997) 297.
[31] O. Efremides, M. Bekakos & D. Evans, “Implementation of the generalized WZ factorization on a wavefront array processor”, Int. J. Comput. Math. 79 (2002) 807.
[32] K. Rhofi, M. Ameur & A.
Radid, “Double power method iteration for parallel eigenvalue problem”, Int. J. Pure Appl. Math. 108 (2016) 945.
[33] B. Bylina, J. Bylina & M. Piekarz, “Influence of loop transformations on performance and energy consumption of the multithreaded WZ factorization”, 17th IEEE Conference on Computer Science and Intelligence Systems (2022) 479.
[34] A. Benaini & D. Laiymani, “Generalized WZ factorization on a reconfigurable machine”, Parallel Algorithms Appl. 3 (1994) 261.
[35] D. Evans & R. Abdullah, “The parallel implicit elimination (PIE) method for the solution of linear systems”, Parallel Algorithms Appl. 1 (1994) 153.
[36] E. Golpar-Raboky & E. Babolian, “On the WZ factorization of the real and integer matrices”, Iranian Journal of Mathematical Sciences and Informatics 17 (2022) 71.
[37] D. J. Tylavsky, “Quadrant interlocking factorization: a form of block LU factorization”, IEEE Proceedings (1986) 232.
[38] I. Garcia, J. Merelo, J. D. Bruguera & E. L. Zapata, “Parallel quadrant interlocking factorization on hypercube computers”, Parallel Computing 15 (1990) 87.
[39] R. Asenjo, M. Ujaldon & E. Zapata, “Parallel WZ factorization on mesh multiprocessors”, Microprocessing and Microprogramming 38 (1993) 319.
[40] G. Heinig & K. Rost, “Fast algorithms for Toeplitz and Hankel matrices”, Linear Algebra Appl. 435 (2011) 1.
[41] B. Bylina & J. Bylina, “Analysis and comparison of reordering for two factorization methods (LU and WZ) for sparse matrices”, International Conference on Computational Science (2008) 983.
[42] D. Ahmed & N. Askar, “Parallelize and analysis LU factorization and quadrant interlocking factorization algorithm in OpenMP”, Journal of Duhok University (2018) 46.
[43] B. Bylina & J. Bylina, “OpenMP thread affinity for matrix factorization on multicore systems”, IEEE Federated Conference on Computer Science and Information Systems (2017) 489.
[44] C.
Demeure, “Bowtie factors of Toeplitz matrices by means of split algorithms”, IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (1989) 1601.
[45] O. Babarinsa & H. Kamarulhaili, “Quadrant interlocking factorization of hourglass matrix”, AIP Conference Proceedings 030009 (2018) 1.
[46] O. Babarinsa, M. Arif & H. Kamarulhaili, “Potential applications of hourglass matrix and its quadrant interlocking factorization”, ASM Science Journal 12 (2019) 72.
[47] O. Babarinsa, O. Ihinkalu, V. Cyril-Okeme, H. Kamarulhaili, A. Mandangan, A. Z. M. Sofi & A. B. Disu, “Application of hourglass matrix in Goldreich-Goldwasser-Halevi encryption scheme”, Journal of the Nigerian Society of Physical Sciences (2022) 874.
[48] O. Babarinsa & H. Kamarulhaili, “Mixed hourglass graph”, AIP Conference Proceedings 2184 (2019).
[49] S. Arumugam, A. Brandstadt, T. Nishizeki & K. Thulasiraman, Handbook of Graph Theory, Combinatorial Optimization, and Algorithms, Chapman and Hall/CRC (2016).
[50] O. Babarinsa, “Graph theory: A lost component for development in Nigeria”, Journal of the Nigerian Society of Physical Sciences 4 (2022) 1. http://dx.doi.org/10.46481/jnsps.2022.844
[51] K. Guo & B. Mohar, “Hermitian adjacency matrix of digraphs and mixed graphs”, Journal of Graph Theory 85 (2015) 217.
[52] R. Ponraj, S. S. Narayanan & A. Ramasamy, “Total mean cordiality of umbrella, butterfly and dumbbell graphs”, Jordan J. Math. and Stat. (JJMS) 8 (2015) 59.
[53] S. Alikhani, J. I. Brown & S. Jahari, “On the domination polynomials of friendship graphs”, Filomat 30 (2016) 169.
[54] M. Liu, Y. Zhu, H. Shan & K. C. Das, “The spectral characterization of butterfly-like graphs”, Linear Algebra and its Applications 513 (2017) 55.
[55] O. Babarinsa & H. Kamarulhaili, “Mixed energy of a mixed hourglass graph”, Communications in Mathematics and Applications 10 (2019) 45. http://dx.doi.org/10.26713/cma.v10i1.1143