EXTRACTA MATHEMATICAE Vol. 37, Num. 2 (2022), 223 – 242
doi:10.17398/2605-5686.37.2.223
Available online April 1, 2022

Dynamics of products of nonnegative matrices

S. Jayaraman, Y.K. Prajapaty, S. Sridharan
Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), India
sachindranathj@iisertvm.ac.in, sachindranathj@gmail.com, prajapaty0916@iisertvm.ac.in, shrihari@iisertvm.ac.in

Received January 5, 2022; Presented by J. Torregrosa; Accepted March 4, 2022

Abstract: The aim of this manuscript is to understand the dynamics of products of nonnegative matrices. We extend a well-known consequence of the Perron-Frobenius theorem on the periodic points of a nonnegative matrix to products of finitely many nonnegative matrices associated to a word and, later, to products of nonnegative matrices associated to a word of possibly infinite length.

Key words: Products of nonnegative matrices, common eigenvectors, common periodic points, orbits of infinite matrix products.

MSC (2020): 15A27, 37H12, 15B48.

1. Introduction

Given a family $\mathcal{F}$ of functions on a set $\Omega$, an element $w_0 \in \Omega$ is said to be a common fixed point for $\mathcal{F}$ if $f(w_0) = w_0$ for all $f \in \mathcal{F}$. The existence and computation of such a point has been a topic of interest among several mathematicians; for instance, see [2, 11]. Of particular interest is the case when the collection is a multiplicative semigroup or a group $\mathcal{M}$ of matrices, where a more general question on the existence of common eigenvectors arises. A classic example of a multiplicative semigroup of matrices is the collection of matrices whose entries are nonnegative real numbers. In a recent work, Bernik et al. [3] determined conditions that ensure the existence of a common fixed point and, more generally, the existence of a common eigenvector for such a collection $\mathcal{M}$. The existence of common eigenvectors for a collection of matrices is in itself a nontrivial question and plays a major role in many problems in matrix analysis. For recent results on periods and periodic points of iterations of subhomogeneous maps on a proper polyhedral cone, we refer the reader to [1] (for instance, see Theorem 4.2) and the references cited therein.

We work throughout with the field $\mathbb{R}$ of real numbers. Let $M_n(\mathbb{R})$ denote the real vector space of $n \times n$ matrices. The subset of $M_n(\mathbb{R})$ consisting of matrices whose entries are nonnegative real numbers (such a matrix is usually called a nonnegative matrix) is denoted by $M_n(\mathbb{R}_+)$. For any matrix $A \in M_n(\mathbb{R})$, we define the spectrum, the spectral radius and the norm of $A$, respectively, as follows:

$\mathrm{spec}(A)$ = the set of all eigenvalues of $A$, some of which may be complex numbers;
$\rho(A) = \max\{ |\lambda| : \lambda \in \mathrm{spec}(A) \}$;
$\|A\|$ = the operator norm of $A$, induced by the Euclidean norm on $\mathbb{R}^n$.

For any $N < \infty$, we fix a finite collection of matrices $\{ A_1, A_2, \ldots, A_N : A_r \in M_n(\mathbb{R}_+) \}$ and define the following discrete dynamical system: for $x_0 \in \mathbb{R}^n$, define
\[
x_{j+1} := A_{\omega_j} x_j, \quad \text{for } \omega_j \in \{ 1, 2, \ldots, N \}. \tag{1.1}
\]
That is, from a point $x_j$ at time $t = j$, we arrive at the point $x_{j+1}$ at time $t = j+1$ by randomly choosing one of the matrices from the above finite collection and applying it to $x_j$. Observe that in order to give proper meaning to this iterative scheme, one needs to understand nonhomogeneous products of matrices.
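To fix ideas, the iteration scheme (1.1) can be simulated directly. The following minimal sketch assumes NumPy; the two matrices, the initial point and the uniform random choice of letters are toy assumptions made only for illustration.

```python
import numpy as np

def random_orbit(matrices, x0, steps, rng=None):
    """Iterate x_{j+1} = A_{omega_j} x_j for a randomly chosen word omega.

    `matrices` is the finite collection {A_1, ..., A_N}; the letters are
    drawn uniformly at random here purely for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    orbit, word = [x], []
    for _ in range(steps):
        r = rng.integers(len(matrices))   # omega_j, 0-based internally
        x = matrices[r] @ x
        word.append(r + 1)                # report letters in {1, ..., N}
        orbit.append(x)
    return word, orbit

# Toy example: two 2x2 nonnegative matrices, each with spectral radius 1.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])          # a permutation matrix
A2 = np.array([[1.0, 2.0], [2.0, 1.0]]) / 3.0    # row sums equal to 1
word, orbit = random_orbit([A1, A2], x0=[1.0, 0.0], steps=5)
```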
Recall that given a self-map $f$ on a topological space $X$, an element $x \in X$ is called a periodic point of $f$ if there exists a positive integer $q$ such that $f^q(x) = x$. In such a case, the smallest such integer $q$ satisfying $f^q(x) = x$ is called the period of the periodic point $x$. The starting point of this work is the following consequence of the Perron-Frobenius theorem, as can be found in [8, Theorem B.4.7].

Theorem 1.1. Let $A \in M_n(\mathbb{R}_+)$ with $\rho(A) \le 1$. Then, there exists a positive integer $q$ such that for every $x \in \mathbb{R}^n$ with $\big( \|A^k x\| \big)_{k \in \mathbb{N}}$ bounded, we have
\[
\lim_{k \to \infty} A^{kq} x = \xi_x,
\]
where $\xi_x$ is a periodic point of $A$ whose period divides $q$.

The spectral radius condition in Theorem 1.1 can be dispensed with, but only at the cost of restricting to the orbits of points in the positive cone $\mathbb{R}^n_+$. An illustration to this effect can be found in [2, page 321].

We are interested in a generalization of Theorem 1.1 in which the matrix $A$ is replaced by a product of the matrices $A_r$, possibly an infinite one, drawn from the finite collection of nonnegative matrices $\{ A_1, \ldots, A_N \}$. Besides generalizing Theorem 1.1 as described above, we also bring out the existence of common periodic points for the said collection of matrices.

This manuscript is organized as follows: In Section 2, we introduce basic notations, only as much as necessary to state the main results of this paper, namely Theorem 2.1 and Theorem 2.3. In Section 3, we familiarise the readers with some results from the literature on adequate conditions to impose on a collection of matrices to ensure the existence of common eigenvectors. In Section 4, we prove Theorem 2.1 and highlight a special case of the theorem as Corollary 4.2, when the collection of matrices satisfies an additional hypothesis. We follow this with Section 5, where we present a few examples that illustrate the theorems. In Section 6, we focus on words of infinite length based on the finite collection of matrices considered so far and write the proof of Theorem 2.3.

2. Main results

In this section, we introduce some notations, explain the underlying setting of the main results and state the main results of this paper. As explained in the introductory section, we fix a finite set of nonnegative matrices $\{ A_1, \ldots, A_N \}$, $N < \infty$. For any finite $M \in \mathbb{N}$ and $p \in \mathbb{N}$, we denote the set of all $p$-lettered words on the set of the first $M$ positive integers by
\[
\Sigma^p_M := \big\{ \omega = (\omega_1 \omega_2 \cdots \omega_p) : \omega_r \in \{ 1, \ldots, M \} \big\}.
\]
For any $p$-lettered word $\omega := (\omega_1 \omega_2 \cdots \omega_p) \in \Sigma^p_N$, we define the (finite) matrix product
\[
A_\omega := A_{\omega_p} \times A_{\omega_{p-1}} \times \cdots \times A_{\omega_2} \times A_{\omega_1}. \tag{2.1}
\]
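To make the convention in (2.1) concrete — the first letter of $\omega$ acts first on a vector, so it sits rightmost in the product — here is a minimal sketch assuming NumPy; the helper name `word_product` and the toy matrices are our own choices for illustration.

```python
import numpy as np
from functools import reduce

def word_product(word, matrices):
    """Return A_omega = A_{omega_p} ... A_{omega_1} for a word over {1, ..., N}.

    Letters are 1-based, as in the paper; the first letter acts first on x,
    hence it appears rightmost in the product.
    """
    factors = [matrices[r - 1] for r in reversed(word)]
    return reduce(np.matmul, factors)

A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 2.0], [2.0, 1.0]]) / 3.0
A_12 = word_product([1, 2], [A1, A2])   # equals A2 @ A1
assert np.allclose(A_12, A2 @ A1)
```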
A key hypothesis in our first theorem assumes the existence of a nontrivial set of common eigenvectors, say $E = \{ v_1, v_2, \ldots, v_d \}$, for the given collection of matrices $\{ A_1, \ldots, A_N \}$. These common eigenvectors may be vectors in $\mathbb{R}^n$ or $\mathbb{C}^n$. A sufficient condition that ensures the existence of common eigenvectors for the given collection is to demand that the collection be partially commuting, quasi-commuting or a Laffey pair when $N = 2$, or that the collection be quasi-commuting when $N \ge 3$. Each of these terms is explained in Section 3. Define
\[
\mathrm{LC}(E) = \big\{ \alpha_1 v_1 + \cdots + \alpha_d v_d : \alpha_j \in \mathbb{C} \text{ satisfying } \alpha_{s_1} = \overline{\alpha_{s_2}} \text{ whenever } v_{s_1} = \overline{v_{s_2}}, \text{ and } \alpha_j \in \mathbb{R} \text{ otherwise} \big\}. \tag{2.2}
\]
We now state our first result in this article.

Theorem 2.1. Let $\{ A_1, A_2, \ldots, A_N \}$, $N < \infty$, be a collection of $n \times n$ matrices with nonnegative entries, each having spectral radius 1. Assume that the collection satisfies at least one of the following conditions, each of which ensures the existence of a nontrivial set of common eigenvectors.

1. If $N = 2$, then the collection is either partially commuting, quasi-commuting or a Laffey pair.
2. If $N \ge 3$, then the collection is quasi-commuting.

Let $E$ denote the set of all common eigenvectors of the collection of matrices. For any finite $p$, let $\omega \in \Sigma^p_N$ and let $A_\omega$ be the matrix associated to the word $\omega$. Then, for any vector $x \in \mathrm{LC}(E)$, there exists an integer $q_\omega \ge 1$ such that
\[
\lim_{k \to \infty} A_\omega^{k q_\omega} x = \xi_{(x,\omega)}, \tag{2.3}
\]
where $\xi_{(x,\omega)}$ is a periodic point of $A_\omega$ whose period divides $q_\omega$. Moreover, when $p \ge N$ and $\omega$ is such that for all $1 \le r \le N$ there exists $1 \le j \le p$ with $\omega_j = r$, the integer $q_\omega$ and the limiting point $\xi_{(x,\omega)} \in \mathbb{R}^n$ are independent of the choice of $\omega$.

Careful readers may have already observed that, subject to the spectral radius condition in the hypothesis of Theorem 2.1, we have $\mathrm{LC}(E) \subseteq \{ x \in \mathbb{R}^n : \sup_k \|A_\omega^k x\| < \infty \}$. However, since the spectral radius of $A_\omega$ cannot be determined in general, we are forced to work only with vectors in $\mathrm{LC}(E)$. Nevertheless, when the collection of matrices satisfies the spectral radius condition and is simultaneously diagonalizable, the above two sets coincide and are equal to $\mathbb{R}^n$. We will look at examples of this kind later, in Section 5. We now illustrate the case when the above set inclusion is proper. Consider the following pair of non-commuting, diagonalizable nonnegative matrices, both having spectral radius 1, with one common eigenvector, namely $e_1$:
\[
A_1 = \frac{1}{3} \begin{pmatrix} 3 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 1 \end{pmatrix}
\quad \text{and} \quad
A_2 = \frac{1}{3} \begin{pmatrix} 3 & 0 & 0 \\ 0 & 1 & 4 \\ 0 & 1 & 1 \end{pmatrix}.
\]
Consider the two-lettered word $\omega = 12 \in \Sigma^2_2$. Then the spectrum of $A_\omega$, given by $\big\{ \frac{2+\sqrt{3}}{3}, 1, \frac{2-\sqrt{3}}{3} \big\}$, has corresponding eigenvectors
\[
u = \big( 0,\, 1+\sqrt{3},\, -1 \big), \quad e_1, \quad v = \big( 0,\, 2-\sqrt{3},\, -1 \big)
\]
that form a basis for $\mathbb{R}^3$. Since the largest eigenvalue is larger than unity, we observe that the sequence $\big( \|A_\omega^k u\| \big)_{k \in \mathbb{N}}$ is unbounded. Thus, $\lim_{k \to \infty} A_\omega^k u$ does not exist. Further, in this case, we note that
\[
\mathrm{LC}(E) = \{ \alpha_1 e_1 : \alpha_1 \in \mathbb{R} \} \subsetneq \big\{ x \in \mathbb{R}^3 : \sup_k \|A_\omega^k x\| < \infty \big\} = \{ \alpha_1 e_1 + \alpha_2 v : \alpha_1, \alpha_2 \in \mathbb{R} \}.
\]

We now denote the interior of the nonnegative orthant of $\mathbb{R}^n$ by $(\mathbb{R}^n_+)^\circ$, a convex cone, and define the logarithm and exponential maps, which appear frequently in nonlinear Perron-Frobenius theory, as follows: $\log : (\mathbb{R}^n_+)^\circ \to \mathbb{R}^n$ and $\exp : \mathbb{R}^n \to (\mathbb{R}^n_+)^\circ$ are given by
\[
\log(x) = (\log x_1, \ldots, \log x_n) \quad \text{and} \quad \exp(x) = (e^{x_1}, \ldots, e^{x_n}).
\]
As one may expect, these functions act as inverses of each other on the interior of $\mathbb{R}^n_+$. More on these functions and their uses in nonlinear Perron-Frobenius theory can be found in the monograph [8]. A nonnegative matrix, when viewed as a linear map on $\mathbb{R}^n$, preserves the partial order induced by $\mathbb{R}^n_+$. A map $f$ defined on a cone in $\mathbb{R}^n$ is said to be subhomogeneous if for every $\lambda \in [0,1]$ we have $\lambda f(x) \le f(\lambda x)$ for every $x$ in the cone, and homogeneous if $f(\lambda x) = \lambda f(x)$ for every nonnegative $\lambda$ and every $x$ in the cone. It is then easy to verify that the function $f := \exp \circ A \circ \log$ is a well-defined subhomogeneous map on $(\mathbb{R}^n_+)^\circ$.
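For readers who prefer to experiment, here is a minimal sketch, assuming NumPy, of the map $f = \exp \circ A \circ \log$ together with a spot-check of the subhomogeneity inequality at random points of the open orthant. The matrix $A$ is a toy choice of ours with row sums at most 1, which is what makes the inequality easy to observe in this sketch.

```python
import numpy as np

def conjugated_map(A):
    """Return f = exp o A o log, a self-map of the open positive orthant."""
    def f(y):
        y = np.asarray(y, dtype=float)
        assert np.all(y > 0), "f is only defined on the interior of R^n_+"
        return np.exp(A @ np.log(y))
    return f

# Toy nonnegative matrix (our choice) with row sums at most 1, so that
# the subhomogeneity inequality lam * f(y) <= f(lam * y) holds below.
A = np.array([[0.5, 0.3],
              [0.2, 0.4]])
f = conjugated_map(A)

rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.uniform(0.1, 10.0, size=2)
    lam = rng.uniform(0.01, 1.0)
    assert np.all(lam * f(y) <= f(lam * y) + 1e-12)
```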
We now state a corollary to Theorem 2.1 for an appropriate subhomogeneous map $f_\omega$.

Corollary 2.2. Let $\{ A_1, \ldots, A_N \}$, $N < \infty$, be a set of $n \times n$ matrices satisfying all the hypotheses of Theorem 2.1. For any finite $p$, let $\omega \in \Sigma^p_N$ and let $A_\omega$ be the matrix associated with the word $\omega$. Consider the function $f_\omega : (\mathbb{R}^n_+)^\circ \to (\mathbb{R}^n_+)^\circ$ given by $f_\omega = \exp \circ A_\omega \circ \log$. Then, for any $y = e^x \in (\mathbb{R}^n_+)^\circ$ with $x \in \mathrm{LC}(E)$, there exists an integer $q \ge 1$ such that
\[
\lim_{k \to \infty} f_\omega^{kq}\, y = \eta_y, \tag{2.4}
\]
where $\eta_y$ is a periodic point of $f_\omega$ whose period divides $q$.

Our final theorem in this paper concerns the orbit of a point $x \in \mathbb{R}^n$ under the action of an infinitely long word whose letters belong to $\{ A_1, \ldots, A_N \}$. In order to keep matters simple, we shall assume that the given collection of matrices is pairwise commuting, with each matrix being diagonalizable over $\mathbb{C}$. This ensures the existence of $n$ linearly independent (over $\mathbb{C}$) common eigenvectors $E = \{ v_1, \ldots, v_n \}$ for the given collection. Let the first $\kappa$ of these common eigenvectors correspond to eigenvalues of modulus 1 for every matrix $A_r$ in the collection. Observe that in this case $\mathrm{LC}(E) = \mathbb{R}^n$. Further, for any vector $x \in \mathbb{R}^n$ given by $x = \sum_{s=1}^{n} \alpha_s v_s$, with the coefficients obeying the conditions mentioned in Equation (2.2), we define the support of the vector $x$ as
\[
I(x) = \mathrm{supp}(x) = \{ 1 \le s \le n : \alpha_s \ne 0 \}.
\]

We now give a brief overview of the space of infinite-lettered words on finitely many letters. Endowing the set of letters $\{ 1, 2, \ldots, N \}$ with the discrete metric (given by the Kronecker delta function), one can topologize $\Sigma^p_N$ with the appropriate product metric. When $p = \infty$, notice that a basis for the topology on the space of infinite-lettered words on $N$ symbols, namely $\Sigma^\infty_N$, is given by the cylinder sets that fix the initial finitely many coordinates; i.e., given any $\omega \in \Sigma^p_N$ for some $p \in \mathbb{Z}^+$, the corresponding cylinder set is given by
\[
\big[ \omega_1 \omega_2 \cdots \omega_p \big] = \big\{ \tau \in \Sigma^\infty_N : \tau_j = \omega_j \text{ for } 1 \le j \le p \big\}.
\]
For more details on the spaces $\Sigma^p_N$ or $\Sigma^\infty_N$, one may refer to [6]. For any $p$-lettered word $\omega \in \Sigma^p_N$, we denote by $\overline{\omega}$ the infinite-lettered word obtained by concatenating $\omega$ with itself infinitely many times, i.e., $\overline{\omega} = (\omega\, \omega \cdots)$. Under the topology defined on $\Sigma^\infty_N$, one may observe that
\[
\Sigma^\infty_N = \overline{ \bigcup_{p \ge 1} \big\{ \overline{\omega} : \omega \in \Sigma^p_N \big\} }.
\]
We know from Theorem 2.1 that, upon satisfying the necessary technical conditions, $\lim_{k \to \infty} A_\omega^{kq} x = \xi_x$ whenever $x \in \mathrm{LC}(E)$. Thus, the following definition makes sense. Let
\[
\widetilde{A}_{\overline{\omega}} := \lim_{k \to \infty} \big( A_\omega^{q} \big)^{k}.
\]
However, since Theorem 2.1 only asserts $\xi_x$ to be a periodic point whose period divides $q$, we shall consider the map $\widetilde{A}_{\overline{\omega}} : \mathbb{R}^n \to (\mathbb{R}^n)^q$. The precise action of $\widetilde{A}_{\overline{\omega}}$ on points in $\mathbb{R}^n$ is given by
\[
\widetilde{A}_{\overline{\omega}}(x) = \big( \xi_x,\, A_\omega \xi_x,\, \ldots,\, A_\omega^{q-1} \xi_x \big). \tag{2.5}
\]

Let $\tau = (\tau_1 \tau_2 \tau_3 \cdots) \in \Sigma^\infty_N$ be any arbitrary infinite-lettered word that encounters all the $N$ letters within a finite time, say $m$. It is easy to observe that the sequence
\[
\Big( \overline{(\tau_1 \cdots \tau_m)},\ \overline{(\tau_1 \cdots \tau_{m+1})},\ \cdots \Big)
\]
converges to $\tau$ in the topology on $\Sigma^\infty_N$ described above. For any $p \ge m$, denote by $\tau[p]$ the infinite-lettered word $\overline{(\tau_1 \cdots \tau_p)}$ that occurs in the above sequence converging to the given $\tau \in \Sigma^\infty_N$. Moreover, from the discussion above, we have that
\[
\widetilde{A}_{\tau[p]} x = \big( \xi_x,\, A_{\tau[p]} \xi_x,\, \ldots,\, A_{\tau[p]}^{q-1} \xi_x \big).
\]
Notice that the first component of this vector in $(\mathbb{R}^n)^q$ is always $\xi_x$, for all $p \ge m$. Further, we define, for every $r \in \{ 1, \ldots, N \}$,
\[
\Phi_{(\tau,r)}(p) = \text{the number of occurrences of } A_r \text{ in } A_{\tau[p]}.
\]
We now state our final theorem in this paper.
Theorem 2.3. Let $\{ A_1, \ldots, A_N \}$, $N < \infty$, be a collection of $n \times n$ simultaneously diagonalizable nonnegative matrices, each having spectral radius at most 1. Suppose $\tau \in \Sigma^\infty_N$ encounters all the $N$ letters within a finite time, say $m$. Then, for any $x \in \mathbb{R}^n$, there exist an increasing sequence $\{ p_\gamma \}_{\gamma \ge 1}$ of positive integers (depending on $x$) and a finite collection of positive integers $\{ \Lambda_{(r,s)} \}$, for $1 \le r \le N$ and $1 \le s \le \kappa$, such that
\[
\sum_{r=1}^{N} \Lambda_{(r,s)} \big[ \Phi_{(\tau,r)}(p_{\gamma_k}) - \Phi_{(\tau,r)}(p_{\gamma_{k'}}) \big] \equiv 0 \pmod{q}, \quad \text{for all } s \in I(x) \cap \{ 1, \ldots, \kappa \},
\]
where $p_{\gamma_k}$ and $p_{\gamma_{k'}}$ are any two integers from the sequence $\{ p_\gamma \}$.

The above result may appear to be merely an arithmetic property in a paper that deals with random dynamical systems generated by finitely many matrices; however, the authors urge the readers to note the following. As explained earlier, we know that $\widetilde{A}_{\tau[p]} : \mathbb{R}^n \to (\mathbb{R}^n)^q$. Observe that $\lim_{p \to \infty} \widetilde{A}_{\tau[p]} x$ does not necessarily exist. However, from the proof of Theorem 2.3, we will obtain the following corollary:

Corollary 2.4. For each $x \in \mathbb{R}^n$, there exists an increasing sequence $\{ p_\gamma \}_{\gamma \ge 1}$ (depending on $x$) such that $\big\{ \widetilde{A}_{\tau[p_\gamma]} x \big\}_{\gamma \ge 1}$ is a constant sequence and therefore $\lim_{\gamma \to \infty} \widetilde{A}_{\tau[p_\gamma]} x$ exists.

3. Common eigenvectors for a collection of matrices

A key ingredient in our main results is the existence of a nontrivial set of common eigenvectors for a given collection $\{ A_1, \ldots, A_N \}$ of matrices. It is a well-known result that if every matrix in the collection is diagonalizable over $\mathbb{C}$ and the collection commutes pairwise, then there is a common similarity matrix that puts all the matrices in diagonal form. A collection of non-commuting matrices may or may not have common eigenvectors. The question as to which collections of matrices possess common eigenvectors is extremely nontrivial. In what follows, we give a brief account of this question, as is essential for this work. We begin with the following definition.

Definition 3.1. A collection $\{ A_1, \ldots, A_N \}$ of matrices is said to be quasi-commuting if, for each pair $(r, r')$ of indices, both $A_r$ and $A_{r'}$ commute with their (additive) commutator $[A_r, A_{r'}] := A_r A_{r'} - A_{r'} A_r$.

A classical result of McCoy [5, Theorem 2.4.8.7] says the following:

Theorem 3.2. Let $\{ A_1, \ldots, A_N \}$ be a collection of $n \times n$ matrices. The following statements are equivalent.

1. For every polynomial $p(t_1, \ldots, t_N)$ in $N$ non-commuting variables $t_1, \ldots, t_N$ and every $r, r' = 1, \ldots, N$, the matrix $p(A_1, \ldots, A_N)\,[A_r, A_{r'}]$ is nilpotent.
2. There is a unitary matrix $U$ such that $U^* A_r U$ is upper triangular for every $r = 1, \ldots, N$.
3. There is an ordering $\lambda^{(r)}_1, \ldots, \lambda^{(r)}_n$ of the eigenvalues of each of the matrices $A_r$, $1 \le r \le N$, such that for any polynomial $p(t_1, \ldots, t_N)$ in $N$ non-commuting variables, the eigenvalues of $p(A_1, \ldots, A_N)$ are $p\big( \lambda^{(1)}_s, \ldots, \lambda^{(N)}_s \big)$, $s = 1, \ldots, n$.

If the matrices and the polynomials are over the real field, then all calculations may be carried out over $\mathbb{R}$, provided all the matrices have eigenvalues in $\mathbb{R}$. It turns out that a sufficient condition that guarantees any of the above three statements is that the collection of matrices be quasi-commutative (see Drazin et al. [4]). Moreover, the first statement implies that the collection $\{ A_1, \ldots, A_N \}$ has common eigenvectors. There are also other classes of matrices which possess common eigenvectors.

A pair $(A_1, A_2)$ of matrices is said to partially commute if they have common eigenvectors. Moreover, two matrices $A_1$ and $A_2$ partially commute if and only if the Shemesh subspace
\[
\mathcal{N} = \bigcap_{k,l=1}^{n-1} \ker\big( [A_1^k, A_2^l] \big)
\]
is a nontrivial maximal invariant subspace of $A_1$ and $A_2$ over which both $A_1$ and $A_2$ commute (see Shemesh [10]). The number of linearly independent common eigenvectors of the pair cannot exceed the dimension of $\mathcal{N}$. A pair $(A_1, A_2)$ of matrices is called a Laffey pair if $\mathrm{rank}([A_1, A_2]) = 1$. It can be shown that such a pair of matrices partially commutes, but does not commute.
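The Shemesh criterion lends itself to a direct numerical test. The following sketch, assuming NumPy and SciPy, stacks the commutators $[A_1^k, A_2^l]$ and computes their common kernel; the function name is ours, and the pair used is the one from Section 2 that shares the eigenvector $e_1$.

```python
import numpy as np
from scipy.linalg import null_space

def shemesh_subspace(A1, A2, tol=1e-10):
    """Numerically estimate the Shemesh subspace
    N = intersection over 1 <= k, l <= n-1 of ker([A1^k, A2^l]).

    Columns of the output form a basis for N; N != {0} exactly when
    A1 and A2 have a common eigenvector (Shemesh [10]).
    """
    n = A1.shape[0]
    blocks = []
    for k in range(1, n):
        for l in range(1, n):
            C = np.linalg.matrix_power(A1, k) @ np.linalg.matrix_power(A2, l) \
                - np.linalg.matrix_power(A2, l) @ np.linalg.matrix_power(A1, k)
            blocks.append(C)
    # x lies in every kernel iff the stacked matrix annihilates x.
    return null_space(np.vstack(blocks), rcond=tol)

# The non-commuting pair from Section 2, sharing the eigenvector e1.
A1 = np.array([[3, 0, 0], [0, 1, 2], [0, 2, 1]]) / 3.0
A2 = np.array([[3, 0, 0], [0, 1, 4], [0, 1, 1]]) / 3.0
print(shemesh_subspace(A1, A2))   # one-dimensional, spanned by e1
```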
4. Proof of Theorem 2.1

In this section, we prove Theorem 2.1, after stating a theorem due to Frobenius. Suppose $A$ is an irreducible matrix in $M_n(\mathbb{R}_+)$ such that there are exactly $\kappa$ eigenvalues of modulus $\rho(A)$. This integer $\kappa$ is called the index of imprimitivity of $A$. If $\kappa = 1$, the matrix $A$ is said to be primitive. If $\kappa > 1$, the matrix is said to be imprimitive.

Theorem 4.1. ([12, Theorem 6.18]) Let $A$ be an irreducible nonnegative matrix with index of imprimitivity equal to $\kappa$. If $\lambda_1, \ldots, \lambda_\kappa$ are the eigenvalues of $A$ of modulus $\rho(A)$, then $\lambda_1, \ldots, \lambda_\kappa$ are the distinct $\kappa$-th roots of $[\rho(A)]^\kappa$.

Proof of Theorem 2.1. Recall that $E = \{ v_1, \ldots, v_d \}$ is a set of $d$ common eigenvectors of the matrices $A_1, \ldots, A_N$ satisfying $A_r v_s = \lambda_{(r,s)} v_s$, where $\lambda_{(r,s)}$ is an eigenvalue of the matrix $A_r$ corresponding to the eigenvector $v_s$, $1 \le s \le d$. Observe that for any $p$-lettered word $\omega = (\omega_1 \cdots \omega_p)$, we have
\[
A_\omega v_s = \lambda_{(\omega_p,s)} \cdots \lambda_{(\omega_1,s)} v_s = \lambda_{(\omega,s)} v_s, \quad \text{where } \lambda_{(\omega,s)} = \lambda_{(\omega_p,s)} \cdots \lambda_{(\omega_1,s)}.
\]
We now rearrange the common eigenvectors $\{ v_1, \ldots, v_d \}$ as $\{ v_1, \ldots, v_\kappa, v_{\kappa+1}, \ldots, v_d \}$, where $\kappa$ is defined as
\[
\kappa = \# \big\{ v_s : A_r v_s = \lambda_{(r,s)} v_s \text{ with } |\lambda_{(r,s)}| = 1 \text{ for all } 1 \le r \le N \big\}. \tag{4.1}
\]
It is possible that $\kappa = 0$, in which case the limiting vector is the zero vector (as one may observe by the end of this proof). Recall from Equation (2.2) that
\[
\mathrm{LC}(E) = \big\{ \alpha_1 v_1 + \cdots + \alpha_d v_d : \alpha_j \in \mathbb{C} \text{ satisfying } \alpha_{s_1} = \overline{\alpha_{s_2}} \text{ whenever } v_{s_1} = \overline{v_{s_2}}, \text{ and } \alpha_j \in \mathbb{R} \text{ otherwise} \big\}.
\]
Owing to the hypotheses on the spectral radius in the statement of the theorem, we have that for every $x \in \mathrm{LC}(E)$, the sequence $\{ \|A_\omega^k x\| \}_{k \ge 1}$ is bounded. In fact,
\[
\|A_\omega^k x\| = \| \alpha_1 A_\omega^k v_1 + \cdots + \alpha_d A_\omega^k v_d \| \le |\alpha_1| \|v_1\| + \cdots + |\alpha_d| \|v_d\|.
\]
Let $q_1, \ldots, q_N$ be positive integers that satisfy the conclusion of Theorem 1.1 for the matrices $A_1, \ldots, A_N$, respectively. For some $p > N$, let $\omega$ be a $p$-lettered word in $\Sigma^p_N$ such that for all $1 \le r \le N$, there exists $1 \le j \le p$ such that $\omega_j = r$. Define $q$ to be the least common multiple of the numbers $\{ q_1, \ldots, q_N \}$. For every $s \in \{ 1, \ldots, d \}$ and $r \in \{ 1, \ldots, N \}$, we enumerate the following possibilities that can occur for the values of $\lambda_{(r,s)}$:

Case 1. $\big( \lambda_{(r,s)} \big)^q = 1$ for every $r$ and for some $s$ with $\lambda_{(r,s)} \in \mathbb{R}$. This implies that the corresponding eigenvector $v_s$ lies in $\mathrm{LC}(E)$.

Case 2. $\big( \lambda_{(r,s)} \big)^q = 1$ for every $r$ and for some $s$ with $\lambda_{(r,s)} \in \mathbb{C}$. This implies that there exist eigenvectors $v_s$ and $\overline{v_s}$, with corresponding eigenvalues conjugate to each other, such that $\alpha_s v_s + \overline{\alpha_s}\, \overline{v_s}$ lies in $\mathrm{LC}(E)$.

Case 3. $|\lambda_{(r,s)}| < 1$ for some $s$ and for some $r$. In this case, the iterates of $v_s$ under the map $A_\omega$ go to $0$; that is, $\lim_{k \to \infty} A_\omega^k v_s = 0$.

For any $x \in \mathrm{LC}(E)$ that can be written as $x = \alpha_1 v_1 + \cdots + \alpha_d v_d$, we have
\[
\lim_{k \to \infty} A_\omega^{kq} x = \alpha_1 \lim_{k \to \infty} \big( \lambda_{(\omega,1)} \big)^{kq} v_1 + \cdots + \alpha_d \lim_{k \to \infty} \big( \lambda_{(\omega,d)} \big)^{kq} v_d = \alpha_1 v_1 + \cdots + \alpha_\kappa v_\kappa =: \xi_{(x,\omega)}.
\]
Observe that $\xi_{(x,\omega)}$ and $q$ are independent of the length of the word and, in fact, of the word $\omega$ itself. We denote $\xi_{(x,\omega)} = \xi_x$. Moreover, $\xi_x \in \mathrm{LC}(E)$.
Further, for $\xi_x = \alpha_1 v_1 + \cdots + \alpha_\kappa v_\kappa$ and $1 \le r \le N$, we have
\[
A_r^{q_r} \xi_x = \alpha_1 A_r^{q_r} v_1 + \cdots + \alpha_\kappa A_r^{q_r} v_\kappa = \alpha_1 \big( \lambda_{(r,1)} \big)^{q_r} v_1 + \cdots + \alpha_\kappa \big( \lambda_{(r,\kappa)} \big)^{q_r} v_\kappa = \xi_x.
\]
Since $\xi_x$ is a periodic point of each of $A_1, \ldots, A_N$ with periods dividing $q_1, \ldots, q_N$ respectively, it follows that $\xi_x$ is a periodic point of $A_\omega$ whose period divides $q$.

We now state a corollary to Theorem 2.1, where we include conditions in the hypotheses that ensure $\mathrm{LC}(E) = \mathbb{R}^n$. The corollary can be proved analogously to the above theorem; however, we present a simpler proof in this case.

Corollary 4.2. In addition to the hypotheses of Theorem 2.1, assume that the considered collection of matrices is pairwise commuting, with each matrix being diagonalizable over $\mathbb{C}$. Then, for any $x \in \mathbb{R}^n$, the same conclusion as in Theorem 2.1 holds.

Proof. We first observe that the extra hypotheses in the statement of the corollary ensure that the matrices $A_1, \ldots, A_N$ are simultaneously diagonalizable, i.e., there exists a nonsingular matrix $Q$ such that $A_r = Q^{-1} D_r Q$, where $D_r = \mathrm{diag}\big( \lambda_{(r,1)}, \lambda_{(r,2)}, \ldots, \lambda_{(r,n)} \big)$ and the $\lambda_{(r,s)}$ are the eigenvalues of the matrix $A_r$, arranged in non-increasing modulus. Let $\omega \in \Sigma^p_N$ with corresponding matrix product $A_\omega$. Then $A_\omega = Q^{-1} D_\omega Q$. By Theorem 4.1, we have $\lambda_{(r,s)}^{q} = 1$ for every eigenvalue of modulus 1. Hence, for every $x \in \mathbb{R}^n$, we have
\[
\lim_{k \to \infty} A_\omega^{kq} x = \lim_{k \to \infty} \big( Q^{-1} D_\omega^{kq} Q \big) x = \xi_x,
\]
where $\xi_x$ is a periodic point of $A_r$ for every $1 \le r \le N$ and is given by $\xi_x = \alpha_1 v_1 + \cdots + \alpha_\kappa v_\kappa$, with $\kappa$ as defined in the proof of Theorem 2.1.

We now show that the diagonalizability condition cannot be weakened in the hypothesis of Corollary 4.2. However, since the matrices in the collection $\{ A_1, \ldots, A_N \}$ commute pairwise, we still obtain a collection of common eigenvectors $E = \{ v_1, \ldots, v_d \}$. For example, consider the following pair of non-diagonalizable, commuting matrices:
\[
A_1 = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
\quad \text{and} \quad
A_2 = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.
\]
Observe that $e_1 = (1,0)^t$ is a common eigenvector for $A_1$ and $A_2$, whereas $e_2 = (0,1)^t$ is a common generalized eigenvector for $A_1$ and $A_2$. We further note that for any word $\omega$ that contains both the letters, the orbit of $e_2$ under $A_\omega$ is unbounded while that of $e_1$ is bounded.

Remarks 4.3. A few remarks are in order.

(a) If all the matrices $A_1, \ldots, A_N$ are pairwise commuting nonnegative symmetric matrices, subject to the spectral radius assumption in Corollary 4.2, then the periods of all the periodic points corresponding to the eigenvalues 1 and $-1$ of all the $A_r$'s are at most 2. Hence, for a matrix product $A_\omega$ corresponding to a word $\omega$, we have $q = 2$.

(b) We have proved Theorem 2.1 for a special choice of $\omega$ that contains all the $N$ letters. Suppose $\omega'$ is any arbitrary $p$-lettered word. Then we can take the appropriate subset of $\{ 1, \ldots, N \}$ whose members have been used in writing the word $\omega'$, and the same result as above follows for $\omega'$.

Suppose the $p$-lettered word is $\omega = (r r \cdots r)$ for some $1 \le r \le N$. Then the above theorem reduces to a particular case, as one may find in [7, 9]. We now state the same as a corollary.

Corollary 4.4. Let $A$ be an $n \times n$ matrix with nonnegative entries that is diagonalizable over $\mathbb{C}$ and of spectral radius 1. Then there exists an integer $q \ge 1$ such that for every $x \in \mathbb{R}^n$, we have $\lim_{k \to \infty} A^{qk} x = \xi_x$, where $\xi_x$ is a periodic point of $A$ with its period dividing $q$.
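Before turning to the worked examples of the next section, Corollary 4.4 is easy to observe numerically. In the sketch below, assuming NumPy, the matrix $A$ is a toy choice of ours with eigenvalues $1$, $-1$ and $1/2$, so $q = 2$ works.

```python
import numpy as np

# A nonnegative, diagonalizable matrix with spectral radius 1; its
# eigenvalues are 1, -1 and 1/2, so q = 2 works in Corollary 4.4.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.5]])
q = 2
x = np.array([3.0, 1.0, 7.0])

y = x.copy()
for _ in range(60):               # compute A^(60q) x by iterating A^q
    y = np.linalg.matrix_power(A, q) @ y

xi = y                            # numerically (3, 1, 0): the limit of A^(kq) x
assert np.allclose(np.linalg.matrix_power(A, q) @ xi, xi)   # period divides q
print(xi, A @ xi)                 # xi and its image under A form a 2-cycle
```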
5. Examples

In this section, we provide several examples that illustrate the various results proved so far. We first fix a few notations. We denote the standard basis vectors of $\mathbb{R}^n$ by $e_1, \ldots, e_n$, while $I_n$ denotes the identity matrix of order $n$. We write the permutation matrices of order $n$ in column-partitioned form, denoted by $P_n$; for instance, we denote the $2 \times 2$ permutation matrix $[\, e_2 \,|\, e_1 \,]$ by $P_2$. The matrix of 1's (of order $n$) is denoted by $J_n$. The diagonal matrix of order $n$ with diagonal entries $d_1, \ldots, d_n$ is denoted by $\mathrm{diag}(d_1, \ldots, d_n)$. Our first example is a fairly simple one and illustrates the scenario in Corollary 4.4.

Example 5.1. Consider the diagonalizable matrix $A = P_2$ with spectral radius 1. If $x = e_2 \in \mathbb{R}^2$, then observe that
\[
A^k x = \begin{cases} e_2, & \text{if } k \text{ is even}, \\ e_1, & \text{if } k \text{ is odd}. \end{cases}
\]
In this example, we obtain $q = 2$.

The next two examples illustrate Corollary 4.2. The first one involves a pair of $6 \times 6$ commuting nonnegative matrices.

Example 5.2. Consider
\[
A_1 = \begin{bmatrix} P_4 & 0 \\ 0 & A_1^{(22)} \end{bmatrix}, \quad \text{where } P_4 = [\, e_4 \,|\, e_1 \,|\, e_2 \,|\, e_3 \,] \text{ and } A_1^{(22)} = \frac{1}{3} \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix};
\]
\[
A_2 = \begin{bmatrix} A_2^{(11)} & 0 \\ 0 & A_2^{(22)} \end{bmatrix}, \quad \text{where } A_2^{(11)} = \frac{1}{2} \begin{bmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix} \text{ and } A_2^{(22)} = \frac{1}{10} \begin{bmatrix} 3 & \sqrt{7} \\ \sqrt{7} & 3 \end{bmatrix}.
\]
It can be easily seen that the matrices $A_1$ and $A_2$ commute and are diagonalizable over $\mathbb{C}$, and therefore are simultaneously diagonalizable. The following table gives the common eigenvectors of $A_1$ and $A_2$ and the corresponding eigenvalues of the matrices $A_1$ and $A_2$:

Eigenvectors            $v_1$   $v_2$   $v_3$   $v_4$   $v_5$         $v_6$
Eigenvalues of $A_1$      1      $-1$    $-i$     $i$     1           $-1/3$
Eigenvalues of $A_2$      1      $-1$     0       0      $\lambda_1$  $\lambda_2$

where $\lambda_1 = \frac{3+\sqrt{7}}{10}$ and $\lambda_2 = \frac{3-\sqrt{7}}{10}$. The common eigenvectors are given by
\[
v_1 = (1,1,1,1,0,0)^t, \quad v_2 = (1,-1,1,-1,0,0)^t, \quad v_3 = (1,i,-1,-i,0,0)^t,
\]
\[
v_4 = (1,-i,-1,i,0,0)^t, \quad v_5 = (0,0,0,0,1,1)^t, \quad v_6 = (0,0,0,0,1,-1)^t.
\]
Following the lines of the proof of Corollary 4.2, we consider the nonsingular matrix (written in column-partitioned form) $Q = [\, v_1 \,|\, \cdots \,|\, v_6 \,]$. Then, for $r = 1, 2$, we have $A_r = Q D_r Q^{-1}$, where $D_r$ is the diagonal matrix consisting of the eigenvalues of $A_r$. Looking at the table of eigenvalues, one can see that $q_1 = 4$ and $q_2 = 2$. Consider any $x \in \mathbb{R}^6$ given by $x = \sum_{i=1}^{6} \alpha_i v_i$, where $\alpha_1, \alpha_2, \alpha_5, \alpha_6 \in \mathbb{R}$ and $\alpha_3, \alpha_4 \in \mathbb{C}$ with $\alpha_3 = \overline{\alpha_4}$. For any word $A_\omega$ that contains both $A_1$ and $A_2$, we have
\[
\lim_{k \to \infty} A_\omega^{4k} x = \Big( Q \lim_{k \to \infty} D_\omega^{4k} Q^{-1} \Big) x = \alpha_1 v_1 + \alpha_2 v_2,
\]
a periodic point of $A_\omega$ with period at most 2, which divides the least common multiple of $q_1$ and $q_2$.
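The computation in Example 5.2 can be spot-checked numerically. The following sketch, assuming NumPy, builds the two block matrices, verifies that they commute, and iterates $A_\omega^4$ for the word $\omega = (1\,2)$; the test vector and the number of iterations are arbitrary choices of ours.

```python
import numpy as np

s7 = np.sqrt(7.0)
P4 = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]], dtype=float)
A1 = np.block([[P4, np.zeros((4, 2))],
               [np.zeros((2, 4)), np.array([[1, 2], [2, 1]]) / 3.0]])
A2 = np.block([[np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]) / 2.0,
                np.zeros((4, 2))],
               [np.zeros((2, 4)), np.array([[3, s7], [s7, 3]]) / 10.0]])

assert np.allclose(A1 @ A2, A2 @ A1)          # the pair commutes

A_w = A2 @ A1                                 # the word omega = (1 2), cf. (2.1)
B = np.linalg.matrix_power(A_w, 4)            # iterate A_omega^4, since lcm(q1, q2) = 4
y = np.array([2.0, -1.0, 0.5, 3.0, 1.0, -2.0])
for _ in range(200):
    y = B @ y

# The limit is a periodic point of A_omega whose period divides 2.
assert np.allclose(np.linalg.matrix_power(A_w, 2) @ y, y, atol=1e-8)
```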
We now present another pair of commuting and diagonalizable matrices, this time in $\mathbb{R}^7$, where we exhibit a periodic point of $A_\omega$ whose period is equal to the least common multiple of the relevant $q_i$'s.

Example 5.3. Let
\[
A_1 = \begin{bmatrix} I_3 & 0 & 0 \\ 0 & P_2 & 0 \\ 0 & 0 & D_1 \end{bmatrix}, \ \text{where } D_1 = \mathrm{diag}\Big( \tfrac{1}{2}, \tfrac{1}{3} \Big);
\qquad
A_2 = \begin{bmatrix} P_3 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & D_2 \end{bmatrix}, \ \text{where } D_2 = \mathrm{diag}\Big( \tfrac{1}{5}, \tfrac{1}{6} \Big)
\]
and $P_3 = [\, e_3 \,|\, e_1 \,|\, e_2 \,]$. As earlier, we write a table with the common eigenvectors and the corresponding eigenvalues of the matrices $A_1$ and $A_2$:

Eigenvectors            $v_1$   $v_2$       $v_3$        $v_4$   $v_5$   $v_6$   $v_7$
Eigenvalues of $A_1$      1       1           1            1      $-1$    $1/2$   $1/3$
Eigenvalues of $A_2$      1    $\omega$   $\omega^2$        1       1      $1/5$   $1/6$

where $\omega$ is a primitive cube root of unity and the $v_i$'s are
\[
v_1 = (1,1,1,0,0,0,0)^t, \quad v_2 = (1,\omega,\omega^2,0,0,0,0)^t, \quad v_3 = (1,\omega^2,\omega,0,0,0,0)^t,
\]
\[
v_4 = (0,0,0,1,1,0,0)^t, \quad v_5 = (0,0,0,1,-1,0,0)^t, \quad v_6 = e_6, \quad v_7 = e_7.
\]
In this example, we have $q_1 = 2$ and $q_2 = 3$. Consider $x \in \mathbb{R}^7$ given by $x = \sum_{i=1}^{7} \alpha_i v_i$, where $\alpha_1, \alpha_4, \alpha_5, \alpha_6, \alpha_7 \in \mathbb{R}$, $\alpha_2, \alpha_3 \in \mathbb{C}$ with $\alpha_2 = \overline{\alpha_3}$, and $\alpha_2, \alpha_5$ are non-zero. Then, for the word $A_\omega = A_1^{p_1} A_2^{p_2}$ with $p_1 \not\equiv 0 \pmod 2$ and $p_2 \not\equiv 0 \pmod 3$, we have $\lim_{k \to \infty} A_\omega^{6k} x = \xi_x$, a periodic point of $A_\omega$ of period 6, the least common multiple of $q_1$ and $q_2$. If $p_1$ violates the above condition, then the period of $\xi_x$ is 3; if $p_2$ violates the above condition, then the period of $\xi_x$ is 2; and if both $p_1$ and $p_2$ violate the above conditions, then the period of $\xi_x$ is 1, all three numbers being factors of the least common multiple of $q_1$ and $q_2$.

We now write two examples in the non-commuting setup that illustrate Theorem 2.1.

Example 5.4. Let
\[
A_1 = \begin{bmatrix} P_4 & 0 \\ 0 & A_1' \end{bmatrix}, \ \text{where } A_1' = \begin{bmatrix} 1/5 & 1/6 \\ 1/6 & 1/5 \end{bmatrix} \text{ and } P_4 = [\, e_4 \,|\, e_1 \,|\, e_2 \,|\, e_3 \,];
\qquad
A_2 = \begin{bmatrix} P_2 & 0 & 0 \\ 0 & P_2 & 0 \\ 0 & 0 & A_2' \end{bmatrix}, \ \text{where } A_2' = \begin{bmatrix} 1/7 & 1/8 \\ 1/7 & 1/8 \end{bmatrix}.
\]
Observe that the matrices $A_1$ and $A_2$ do not commute, but partially commute, giving rise to a set of common eigenvectors given by
\[
v_1 = (1,1,1,1,0,0)^t, \quad v_2 = (1,-1,1,-1,0,0)^t, \quad v_3 = (0,0,0,0,1,1)^t.
\]
The corresponding eigenvalues of the respective matrices are given in the following table:

Eigenvectors            $v_1$   $v_2$   $v_3$
Eigenvalues of $A_1$      1      $-1$    $1/5 + 1/6$
Eigenvalues of $A_2$      1      $-1$    $1/7 + 1/8$

In this case, we obtain $q_1 = 4$ and $q_2 = 2$. Here,
\[
\mathrm{LC}(E) \subsetneq \big\{ x \in \mathbb{R}^6 : \sup_k \|A_\omega^k x\| < \infty \big\},
\]
where $A_\omega$ contains both $A_1$ and $A_2$. Suppose $x \in \mathrm{LC}(E)$ is given by $x = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3$ for $\alpha_i \in \mathbb{R}$. Then we have $\lim_{k \to \infty} A_\omega^{4k} x = \xi_x$, a periodic point of $A_\omega$, however with period at most 2, which divides the least common multiple of $q_1$ and $q_2$.

As in the commuting case, we now present an example in the non-commuting setup and exhibit a periodic point for a particular choice of $A_\omega$ whose period is equal to the least common multiple of the appropriate $q_i$'s.

Example 5.5. Let
\[
A_1 = \begin{bmatrix} I_3 & 0 & 0 \\ 0 & P_2 & 0 \\ 0 & 0 & \frac{1}{2} J_2 \end{bmatrix}
\quad \text{and} \quad
A_2 = \begin{bmatrix} P_3 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & A_2' \end{bmatrix}, \ \text{where } A_2' = \begin{bmatrix} 1/3 & 1/4 \\ 1/3 & 1/4 \end{bmatrix}
\]
and $P_3$ is the permutation matrix defined in Example 5.3. In this example, $A_1$ and $A_2$ form a Laffey pair. They have the following six common eigenvectors:
\[
v_1 = (1,\omega,\omega^2,0,0,0,0)^t, \quad v_2 = (1,\omega^2,\omega,0,0,0,0)^t, \quad v_3 = (1,1,1,0,0,0,0)^t,
\]
\[
v_4 = (0,0,0,1,1,0,0)^t, \quad v_5 = (0,0,0,1,-1,0,0)^t, \quad v_6 = (0,0,0,0,0,1,1)^t.
\]
As earlier, we write the corresponding eigenvalues of the matrices in the following table:

Eigenvectors            $v_1$       $v_2$        $v_3$   $v_4$   $v_5$   $v_6$
Eigenvalues of $A_1$      1           1            1       1      $-1$     1
Eigenvalues of $A_2$   $\omega$   $\omega^2$        1       1       1     $1/3 + 1/4$

Here, $q_1 = 2$ and $q_2 = 3$. Let $A_\omega$ be a matrix product such that the matrix $A_r$ occurs $p_r$ times in $A_\omega$, with $p_1 \not\equiv 0 \pmod 2$ and $p_2 \not\equiv 0 \pmod 3$. For any vector $x = \alpha_1 v_1 + \cdots + \alpha_6 v_6$ with $\alpha_1, \alpha_2 \in \mathbb{C}$ satisfying $\alpha_1 = \overline{\alpha_2}$, $\alpha_3, \alpha_4, \alpha_5, \alpha_6 \in \mathbb{R}$ and $\alpha_1, \alpha_5$ non-zero, we obtain $\lim_{k \to \infty} A_\omega^{6k} x = \xi_x$, a periodic point of $A_\omega$ of period 6, the least common multiple of $q_1$ and $q_2$. If $p_1$ violates the above condition, then the period of $\xi_x$ is 3; if $p_2$ violates the above condition, then the period of $\xi_x$ is 2; and if both $p_1$ and $p_2$ violate the above conditions, then the period of $\xi_x$ is 1, all three numbers being factors of the least common multiple of $q_1$ and $q_2$.

At this juncture, we present one more example that showcases the dependence of the limiting periodic point on the word $\omega$ in the non-commuting set-up, even when the non-common eigenvectors have a bounded orbit.

Example 5.6. Let
\[
A_1 = \frac{1}{3} \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
\quad \text{and} \quad
A_2 = \frac{1}{5} \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}.
\]
It can be easily seen that $A_1 A_2 \ne A_2 A_1$. The eigenvalues of $A_1$ and $A_2$ are $1, -\frac{1}{3}$ and $1, -\frac{1}{5}$, respectively. The vector $(1,1)^t$ is a common eigenvector for $A_1$ and $A_2$ corresponding to the eigenvalue 1.
Moreover, the eigenvalues of $A_1 A_2$ are $1, \frac{1}{15}$ (and the same is true for $A_2 A_1$). It easily follows from this that any $x \in \mathbb{R}^2$ has a bounded orbit. The eigenvector corresponding to the eigenvalue $-\frac{1}{3}$ of $A_1$ is $(1,-1)^t$ and the eigenvector corresponding to the eigenvalue $-\frac{1}{5}$ of $A_2$ is $(2,-1)^t$. Note that
\[
(A_1 A_2)^k - (A_2 A_1)^k = \begin{bmatrix} -\alpha(k) & \alpha(k) \\ -\alpha(k) & \alpha(k) \end{bmatrix} \quad \text{for } k \ge 1,
\]
and therefore the commutator has rank 1, making this a Laffey pair. It is now obvious that
\[
\lim_{k \to \infty} (A_1 A_2)^k (1,1)^t = \lim_{k \to \infty} (A_2 A_1)^k (1,1)^t,
\]
since $\big( (A_1 A_2)^k - (A_2 A_1)^k \big) (1,1)^t = (0,0)^t$. Nevertheless,
\[
\big( (A_1 A_2)^k - (A_2 A_1)^k \big) (2,-1)^t = -3\alpha(k) (1,1)^t, \quad \text{whereas} \quad \big( (A_1 A_2)^k - (A_2 A_1)^k \big) (1,-1)^t = -2\alpha(k) (1,1)^t.
\]
Therefore, if $x$ is one of the points $(2,-1)^t$ or $(1,-1)^t$, then $\lim_{k \to \infty} (A_1 A_2)^k x \ne \lim_{k \to \infty} (A_2 A_1)^k x$, since $\lim_{k \to \infty} \alpha(k) \ne 0$. It is possible to study these examples under the action of the appropriate subhomogeneous map described in Corollary 2.2.

We conclude this section by describing another way of writing Theorem 2.1. Recall that $\Sigma^\infty_N$ denotes the set of all infinite-lettered words on the set of symbols $\{ 1, \ldots, N \}$. Considering the Cartesian product of the symbolic space $\Sigma^\infty_N$ and $\mathbb{R}^n$, one may describe the dynamical system discussed in this paper as follows: Given a collection $\{ A_1, \ldots, A_N \}$ of $n \times n$ matrices, let $T : X = \Sigma^\infty_N \times \mathbb{R}^n \to X$ be defined by $T(\tau, x) = (\sigma\tau, A_{\tau_1} x)$, where $\tau = (\tau_1 \tau_2 \tau_3 \cdots)$ and $\sigma$ is the shift map defined on $\Sigma^\infty_N$ by $(\sigma\tau)_n = \tau_{n+1}$ for $n \ge 1$. We equip $X$ with the corresponding product topology and study $T$ as a non-invertible map.

Theorem 5.7. Let $\{ A_1, \ldots, A_N \}$, $N < \infty$, be a collection of $n \times n$ matrices that satisfy the hypotheses of Theorem 2.1. Suppose $E$ denotes the set of all common eigenvectors of the collection of matrices. Let $\tau \in \Sigma^\infty_N$ be any arbitrary infinite-lettered word that encounters all the $N$ letters within a finite time, say $m$. Let $\{ \tau[p] \}_{p \ge m}$ be the sequence of infinite-lettered words converging to $\tau$, as above, and let $A_{\tau[p]}$ be the matrix associated to the $p$-lettered word $(\tau_1 \cdots \tau_p) \in \Sigma^p_N$. Then, for every $p \ge m$ and any vector $x \in \mathrm{LC}(E)$, there exists an integer $q \ge 1$ such that
\[
\lim_{k \to \infty} T^{kpq} \big( \tau[p], x \big) = \big( \tau[p], \xi_x \big),
\]
where $\big( \tau[p], \xi_x \big)$ is a periodic point of $T$ whose period divides the least common multiple of $p$ and $q$.
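The skew product $T$ above can be simulated directly. The following minimal sketch assumes NumPy; the helper names, the generator representing an infinite word and the toy pair of commuting matrices are our own choices for illustration.

```python
import numpy as np
from itertools import islice

def skew_product_step(tau, x, matrices):
    """One application of T(tau, x) = (sigma tau, A_{tau_1} x).

    `tau` is an iterator over letters in {1, ..., N}; the shift sigma simply
    consumes the first letter, which then acts on x.
    """
    tau = iter(tau)
    first = next(tau)                 # tau_1
    return tau, matrices[first - 1] @ np.asarray(x, dtype=float)

def periodic_word(block):
    """The infinite periodic word (block block ...), as a generator."""
    while True:
        yield from block

A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 2.0], [2.0, 1.0]]) / 3.0
tau, x = periodic_word([1, 2]), np.array([1.0, 0.0])
for _ in range(6):                    # iterate T six times
    tau, x = skew_product_step(tau, x, [A1, A2])
print(x, list(islice(tau, 4)))        # current fibre point and upcoming letters
```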
6. Words of infinite length

We conclude this paper with this final section, where we write the proof of Theorem 2.3. Recall that the hypotheses of Theorem 2.3 and Corollary 4.2 are one and the same.

Proof of Theorem 2.3. Recall from Equation (2.5) that whenever $x \in \mathrm{LC}(E) = \mathbb{R}^n$ (in this case), we have
\[
\widetilde{A}_{\tau[p]} x = \big( \xi_x,\, A_{\tau[p]} \xi_x,\, \ldots,\, A_{\tau[p]}^{q-1} \xi_x \big), \quad \text{for } p \ge m,
\]
where $m$ is the finite stage by which the word $\tau[p]$ encounters all the letters in $\{ 1, 2, \ldots, N \}$. In general, it is not necessary that $A_{\tau[m]} \xi_x = A_{\tau[m+1]} \xi_x$. However, owing to $\xi_x$ being a periodic point of $A_{\tau[p]}$ for $p \ge m$, whose period divides $q$ ($> 1$, say), a simple application of the pigeonhole principle ensures $A_{\tau[m]} \xi_x = A_{\tau[m']} \xi_x$ for some $m' > m$. We choose $m'$ that guarantees $\widetilde{A}_{\tau[m]} x = \widetilde{A}_{\tau[m']} x$, as vectors in $(\mathbb{R}^n)^q$.

Proceeding along similar lines, one obtains an increasing sequence, say $\{ p_\gamma \}$, such that $\big\{ \widetilde{A}_{\tau[p_\gamma]} x \big\}_{\gamma \ge 1}$ is a constant sequence of vectors in $(\mathbb{R}^n)^q$ for every $x \in \mathbb{R}^n$. Thus, for any two integers $p_{\gamma_k}$ and $p_{\gamma_{k'}}$ from the sequence $\{ p_\gamma \}$, we have
\[
A_{\tau[p_{\gamma_k}]}^{j} \xi_x = A_{\tau[p_{\gamma_{k'}}]}^{j} \xi_x \quad \text{for every } 0 \le j \le q-1.
\]
Since $\xi_x = \sum_{s=1}^{\kappa} \alpha_s v_s$, we obtain
\[
\lambda_{(\tau[p_{\gamma_k}],\, s)} = \lambda_{(\tau[p_{\gamma_{k'}}],\, s)} \quad \text{for every } s \in I(x) \cap \{ 1, \ldots, \kappa \}.
\]
This implies that for every $s \in I(x) \cap \{ 1, \ldots, \kappa \}$, we have
\[
\prod_{r=1}^{N} \lambda_{(r,s)}^{\Phi_{(\tau,r)}(p_{\gamma_k})} = \prod_{r=1}^{N} \lambda_{(r,s)}^{\Phi_{(\tau,r)}(p_{\gamma_{k'}})}
\iff
\prod_{r=1}^{N} \lambda_{(r,s)}^{\Phi_{(\tau,r)}(p_{\gamma_k}) - \Phi_{(\tau,r)}(p_{\gamma_{k'}})} = 1.
\]
Since the numbers $\lambda_{(r,s)}$ are $q$-th roots of unity, we may write $\lambda_{(r,s)} = e^{2\pi i \Lambda_{(r,s)}/q}$ for positive integers $\Lambda_{(r,s)}$, and the above display then yields
\[
\sum_{r=1}^{N} \Lambda_{(r,s)} \big[ \Phi_{(\tau,r)}(p_{\gamma_k}) - \Phi_{(\tau,r)}(p_{\gamma_{k'}}) \big] \equiv 0 \pmod{q} \quad \text{for all } s \in I(x) \cap \{ 1, \ldots, \kappa \}.
\]
As pointed out after the statement of Theorem 2.3 in Section 2, and as one may observe from the proof above, $\big\{ \widetilde{A}_{\tau[p_\gamma]} x \big\}_{\gamma \ge 1}$ is constructed to be a constant sequence, thus proving Corollary 2.4.

Acknowledgements

The authors are thankful to the anonymous referee for suggestions that improved the exposition in contextualising Theorem 2.3 and Corollary 2.4 in this manuscript.

References

[1] M. Akian, S. Gaubert, B. Lemmens, R.D. Nussbaum, Iteration of order preserving subhomogeneous maps on a cone, Math. Proc. Cambridge Philos. Soc. 140 (2006), 157 – 176.
[2] M. Akian, S. Gaubert, B. Lemmens, Stability and convergence in discrete convex monotone dynamical systems, J. Fixed Point Theory Appl. 9 (2011), 295 – 325.
[3] J. Bernik, R. Drnovsek, T. Kosir, T. Laffey, G. MacDonald, R. Meshulam, M. Omladic, H. Radjavi, Common fixed points and common eigenvectors for sets of matrices, Linear Multilinear Algebra 53 (2005), 137 – 146.
[4] M.P. Drazin, J.W. Dungey, K.W. Gruenberg, Some theorems on commutative matrices, J. London Math. Soc. 26 (1951), 221 – 228.
[5] R.A. Horn, C.R. Johnson, "Matrix Analysis", Second edition, Cambridge University Press, Cambridge, 2013.
[6] B.P. Kitchens, "Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts", Universitext, Springer-Verlag, Berlin, 1998.
[7] B. Lemmens, Nonlinear Perron-Frobenius theory and dynamics of cone maps, in "Positive Systems", Lect. Notes Control Inf. Sci., 341, Springer, Berlin, 2006, 399 – 406.
[8] B. Lemmens, R.D. Nussbaum, "Nonlinear Perron-Frobenius Theory", Cambridge Tracts in Mathematics, 189, Cambridge University Press, Cambridge, 2012.
[9] R.D. Nussbaum, S.M. Verduyn Lunel, Generalizations of the Perron-Frobenius theorem for nonlinear maps, Mem. Amer. Math. Soc. 138 (1999), no. 659, viii+98.
[10] D. Shemesh, Common eigenvectors of two matrices, Linear Algebra Appl. 62 (1984), 11 – 18.
[11] X. Wang, Z. Cheng, Infinite products of uniformly paracontracting matrices, Linear Multilinear Algebra 64 (2016), 856 – 862.
[12] X. Zhan, "Matrix Theory", Graduate Studies in Mathematics, 147, American Mathematical Society, Providence, RI, 2013.