RATIO MATHEMATICA 25 (2013), 77–94 ISSN: 1592-7415

Hypermatrix Based on Krasner Hypervector Spaces

Maedeh Motameni∗, Reza Ameri, Razieh Sadeghi

Department of Mathematics, Qaemshahr Branch, Islamic Azad University, Qaemshahr, Iran; School of Mathematics, Statistics and Computer Sciences, University of Tehran, Tehran, Iran; Faculty of Mathematics, University of Mazandaran, Babolsar, Iran

motameni.m@gmail.com, rameri@ut.ac.ir, razi$\ $sadeghi@yahoo.com

∗ Corresponding Author

Abstract

In this paper we extend a very specific class of hypervector spaces, called Krasner hypervector spaces, in order to obtain a hypermatrix. To reach this goal, we define dependent and independent vectors in this kind of hypervector space and define a basis and dimension for it. Also, using multivalued linear transformations, we examine the possibility of the existence of a free object here. Finally, we study the fundamental relation on Krasner hypervector spaces and define a functor.

Key words: Hypermatrix, Hypervector spaces, Basis of a hypervector space, Multivalued linear transformations.

MSC2010: 15A33.

1 Introduction

The notion of a hypergroup was introduced by F. Marty in 1934 [5]. Since then many researchers have worked on hyperalgebraic structures and developed this theory (for more details see [2], [3]). Using hyperstructure theory, mathematicians have defined and studied a variety of algebraic structures. Among them, the notion of hypervector spaces has been studied mainly by Vougiouklis [8, 9], Tallini [6, 7] and Krasner [3] (see also [1]). The differences among these three types of hypervector spaces lie mainly in which operation is taken to be a hyperoperation. Vougiouklis studied H_v vector spaces, which rely on a very weak condition stated in terms of intersections.
Tallini defined hypervector spaces using a crisp sum together with a hyperexternal operation, which assigns to every element of a field and every element of the abelian group (V, +) a nonempty subset of V, while Krasner, in his definition of a hypervector space, used a hypersum making (V, ⊕) a canonical hypergroup together with a single-valued external operation. In this paper we adopt Krasner's definition and generalize it. Also, to establish a correct logical relation between the definitions, we introduce the notion of a multivalued linear transformation, and by means of this notion we can speak of the basis and dimension of a Krasner hypervector space. In the sequel, using multivalued functions, we construct a kind of matrix with hyperentries, whose coefficients are taken from the Krasner hyperfield and from the elements of the basis. We also study the notions of singular and nonsingular transformations. Finally, we study the category of Krasner hypervector spaces, define the fundamental relation on it, and, in the last part, define a functor.

2 Preliminaries

In this section we present the definitions and properties of hypervector spaces and their subsets that we need for developing the paper.

A mapping ◦ : H × H −→ P∗(H) is called a hyperoperation (or a join operation), where P∗(H) is the set of all nonempty subsets of H. The join operation is extended to subsets of H in a natural way, so that A ◦ B is given by

A ◦ B = ⋃ {a ◦ b : a ∈ A and b ∈ B}.

The notations a ◦ A and A ◦ a are used for {a} ◦ A and A ◦ {a}, respectively. Generally, the singleton {a} is identified with its element a. A hypergroupoid (H, ◦) which is associative, i.e., x ◦ (y ◦ z) = (x ◦ y) ◦ z for all x, y, z ∈ H, is called a semihypergroup. A hypergroup is a semihypergroup such that for all x ∈ H we have x ◦ H = H = H ◦ x, which is called the reproduction axiom.
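As a minimal sketch of these definitions (our own illustrative example, not from the paper), consider H = {1, 2, 3} with the hyperoperation x ◦ y = {x, y}. A few lines of Python can verify that this hyperoperation, extended to subsets exactly as above, is associative and satisfies the reproduction axiom:

```python
H = {1, 2, 3}

def join(x, y):
    # hyperoperation into P*(H): x ◦ y = {x, y}
    return {x, y}

def join_sets(A, B):
    # extension of ◦ to subsets: A ◦ B = union of a ◦ b over all pairs
    return set().union(*(join(a, b) for a in A for b in B))

# associativity: x ◦ (y ◦ z) = (x ◦ y) ◦ z, so (H, ◦) is a semihypergroup
assert all(join_sets({x}, join(y, z)) == join_sets(join(x, y), {z})
           for x in H for y in H for z in H)

# reproduction axiom: x ◦ H = H = H ◦ x, so (H, ◦) is a hypergroup
assert all(join_sets({x}, H) == H == join_sets(H, {x}) for x in H)
```

Both sides of the associativity check evaluate to {x, y, z}, and x ◦ H sweeps through all of H, so (H, ◦) is a hypergroup.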
Definition 2.1. [3] A semihypergroup (H, +) is called a canonical hypergroup if the following conditions are satisfied:
(i) x + y = y + x, for all x, y ∈ H;
(ii) there exists a unique 0 ∈ H such that 0 + x = {x}, for every x ∈ H;
(iii) for every x ∈ H there exists a unique element x' such that 0 ∈ x + x' (we denote x' by −x);
(iv) for every x, y, z ∈ H, z ∈ x + y ⇐⇒ x ∈ z − y ⇐⇒ y ∈ z − x.

From the definition it can easily be verified that −(−x) = x and −(x + y) = −x − y.

Definition 2.2. [3] A Krasner hyperring is a hyperstructure (R, ⊕, ⋆) where
(i) (R, ⊕) is a canonical hypergroup;
(ii) (R, ⋆) is a semigroup endowed with a two-sided absorbing element 0;
(iii) the product distributes from both sides over the sum.

A hyperfield is a Krasner hyperring (K, ⊕, ⋆) such that (K − {0}, ⋆) is a group.

Definition 2.3. [3] Let (K, ⊕, ⋆) be a hyperskewfield and (V, ⊕) be a canonical hypergroup. We define a Krasner hypervector space over K to be the quadruple (V, ⊕, ·, K), where "·" is a single-valued operation · : K × V −→ V such that for all a ∈ K and x ∈ V we have a · x ∈ V, and for all a, b ∈ K and x, y ∈ V the following conditions hold:
(H1) a · (x ⊕ y) = a · x ⊕ a · y;
(H2) (a ⊕ b) · x ⊆ a · x ⊕ b · x;
(H3) a · (b · x) = (a ⋆ b) · x;
(H4) 0 · x = 0;
(H5) 1 · x = x.

We say that (V, ⊕, ·, K) is anti-left distributive if for all a, b ∈ K and x ∈ V, (a ⊕ b) · x ⊇ a · x ⊕ b · x, and strongly left distributive if for all a, b ∈ K and x ∈ V, (a ⊕ b) · x = a · x ⊕ b · x. In a similar way we define anti-right distributive and strongly right distributive hypervector spaces. V is called strongly distributive if it is both strongly left and strongly right distributive. In the sequel, by a hypervector space we mean a Krasner hypervector space.
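The smallest nontrivial instance of these axioms (a standard example, stated here as our own sketch rather than taken from the paper) is the two-element Krasner hyperfield K = {0, 1}, with the usual product and the hypersum in which 1 ⊕ 1 = {0, 1} is the only genuinely multivalued sum. The canonical-hypergroup axioms of Definition 2.1 can be verified mechanically:

```python
def hsum(x, y):
    """Hypersum on the two-element Krasner hyperfield K = {0, 1}."""
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    return {0, 1}          # 1 ⊕ 1 = {0, 1}

def hsum_sets(A, B):
    # extension of ⊕ to subsets, as in the preliminaries
    return set().union(*(hsum(a, b) for a in A for b in B))

K = {0, 1}

# (i) commutativity
assert all(hsum(x, y) == hsum(y, x) for x in K for y in K)
# associativity, so (K, ⊕) is a semihypergroup
assert all(hsum_sets(hsum(x, y), {z}) == hsum_sets({x}, hsum(y, z))
           for x in K for y in K for z in K)
# (ii) 0 is a scalar identity: 0 ⊕ x = {x}
assert all(hsum(0, x) == {x} for x in K)
# (iii) each x is its own inverse here: 0 ∈ x ⊕ x, i.e. −x = x
assert all(0 in hsum(x, x) for x in K)
# (iv) reversibility: z ∈ x ⊕ y  ⇐⇒  x ∈ z ⊕ (−y), with −y = y
assert all((z in hsum(x, y)) == (x in hsum(z, y))
           for x in K for y in K for z in K)
```

Since (K − {0}, ⋆) = ({1}, ·) is a group and the product distributes over the hypersum, (K, ⊕, ⋆) is a hyperfield in the sense of Definition 2.2.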
3 Krasner Subhypervector Spaces

Here we study some basic results on Krasner hypervector spaces and, after defining the category of Krasner hypervector spaces, we look for a free object in this category.

Definition 3.1. A nonempty subset S of V is a subhyperspace if (S, ⊕) is a canonical subhypergroup of V and for all a ∈ K, x ∈ S, we have a · x ∈ S.

Here we present an example of a Krasner hypervector space.

Example 3.2. Let F be a field, V a vector space over F, and F∗ a multiplicative subgroup of F. For all x, y ∈ V we define the equivalence relation ∼ on V as follows:

x ∼ y ⇐⇒ x = ty, for some t ∈ F∗.

Now let V̄ be the set of all classes of V modulo ∼. Together with the hypersum ⊕ defined by

x̄ ⊕ ȳ = {v̄ ∈ V̄ | v̄ ⊆ x̄ + ȳ},

where x̄ + ȳ denotes the setwise sum in V, the set V̄ is a canonical hypergroup. We consider the external composition

· : F × V̄ −→ V̄, (a, v̄) ↦−→ av.

Then (V̄, ⊕, ·, F) is a hypervector space.

Lemma 3.3. Let Vi, i ∈ I, be hypervector spaces; then ⋂_{i∈I} Vi is also a hypervector space.

Definition 3.4. Let V be a hypervector space and S a nonempty subset of it. The smallest subhypervector space of V containing S is called the linear space generated by S and is denoted by <S>. Moreover, <S> = ⋂_{S⊆W≤V} W.

Theorem 3.5. Let V be a hypervector space and S a nonempty subset of it. Then

<S> = {t ∈ V | t ∈ ∑_{i=1}^{n} a_i · s_i, a_i ∈ K, s_i ∈ S, n ∈ N} = {t_1 ⊕ ... ⊕ t_n | t_i = a_i · s_i}.

Proof. Let A = {t ∈ V | t ∈ ∑_{i=1}^{n} a_i · s_i, a_i ∈ K, s_i ∈ S, n ∈ N}. We claim that (A, ⊕, ·, K) is the smallest hypervector space containing S. First we show that (A, ⊕) is a canonical hypergroup. Commutativity is obvious. For all x ∈ A we have x ∈ ∑_{i=1}^{n} a_i · s_i. Suppose there exists a scalar identity 0_A ∈ A such that 0_A ∈ ∑_{i=1}^{n} b_i · r_i for some b_i ∈ K and r_i ∈ S; then we should have x ⊕ 0_A = ∑_{i=1}^{n} a_i · s_i ⊕ ∑_{i=1}^{n} b_i · r_i = ∑_{i=1}^{n} a_i · s_i ∋ x.
Since for all s_i ∈ A we have s_i ∈ S ⊆ V, and (V, ⊕) is a canonical hypergroup, there exists a scalar identity 0_V in V such that s_i ⊕ 0_V = s_i. Hence in the above equation it is enough to choose b_i = a_i and r_i = 0_V, and we obtain

x ⊕ 0_A = ∑_{i=1}^{n} a_i·s_i ⊕ ∑_{i=1}^{n} a_i·0_V = ∑_{i=1}^{n} a_i·(s_i ⊕ 0_V) = ∑_{i=1}^{n} a_i·s_i ∋ x.

Now for all x ∈ A we define −x = ∑_{i=1}^{n} a_i·(−s_i); then we have

0_A = ∑_{i=1}^{n} a_i·0_V ∈ ∑_{i=1}^{n} a_i·s_i ⊕ ∑_{i=1}^{n} a_i·(−s_i) = ∑_{i=1}^{n} a_i·(s_i ⊕ (−s_i)).

Hence every element of (A, ⊕) has a unique inverse. Moreover, every element of (A, ⊕) is reversible: suppose that for x, y, z ∈ A we have x = ∑_{i=1}^{n} a_i·s_i, y = ∑_{i=1}^{n} a'_i·s'_i, z = ∑_{i=1}^{n} a''_i·s''_i. Since s_i, s'_i, s''_i ∈ S ⊆ V, if s''_i ∈ s_i ⊕ s'_i we have s'_i ∈ s''_i ⊕ (−s_i), so it is sufficient to choose a''_i = a'_i = a_i. Therefore (A, ⊕) is a canonical hypergroup. Now for all t ∈ A and k ∈ K we have

k · t ⊆ k · ∑_{i=1}^{n} a_i·s_i = ∑_{i=1}^{n} (k ⋆ a_i)·s_i ⊆ A.

Then (A, ⊕, ·, K) is a subhypervector space of V. Let W be another subhypervector space of V containing S, and let t ∈ A; then t ∈ ∑_{i=1}^{n} a_i·s_i for some a_i ∈ K, s_i ∈ S, n ∈ N. Since W is a subhypervector space of V containing S, we get ∑_{i=1}^{n} a_i·s_i ⊆ W, and hence A ⊆ W. So A is the smallest subhypervector space of V containing S. Also, for all s ∈ S we have s = 1·s, so s ∈ A and therefore S ⊆ A.

Definition 3.6. Let (V, ⊕, ·), (W, ⊕, ·) be two hypervector spaces over a hyperskewfield K. The mapping T : V −→ P∗(W) is called
(i) a multivalued linear transformation if T(x ⊕ y) ⊆ T(x) ⊕ T(y) and T(a · x) = a · T(x);
(ii) a multivalued good linear transformation if T(x ⊕ y) = T(x) ⊕ T(y) and T(a · x) = a · T(x);
where P∗(W) is the set of nonempty subsets of W. From now on, by an mv-linear transformation we mean a multivalued linear transformation.

Remark 3.7. We define T(0_V) = 0_W.

Definition 3.8. Let V, W be two hypervector spaces over a hyperskewfield K and T : V −→ P∗(W) an mv-linear transformation. The kernel of T is denoted by Ker T and defined by

Ker T = {x ∈ V | 0_W ∈ T(x)}.
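To make Definition 3.6 and Definition 3.8 concrete, here is a minimal sketch (our own hypothetical example, not from the paper): on the two-element hyperfield K = {0, 1} regarded as a hypervector space over itself, the zero transformation T(x) = {0} is mv-linear, and its kernel is all of V:

```python
def hsum(x, y):
    # hypersum of the two-element hyperfield K = {0, 1}, with 1 ⊕ 1 = {0, 1}
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    return {0, 1}

def hsum_sets(A, B):
    return set().union(*(hsum(a, b) for a in A for b in B))

V = {0, 1}

def T(x):
    """The zero transformation V −→ P*(V): trivially mv-linear."""
    return {0}

# T(x ⊕ y) ⊆ T(x) ⊕ T(y): here both sides equal {0}
assert all(set().union(*(T(z) for z in hsum(x, y))) <= hsum_sets(T(x), T(y))
           for x in V for y in V)

# T(a · x) = a · T(x), with the crisp (ordinary) product of K
assert all({a * z for z in T(x)} == T(a * x) for a in V for x in V)

# Ker T = {x ∈ V | 0 ∈ T(x)}; for the zero transformation it is all of V
kernel = {x for x in V if 0 in T(x)}
assert kernel == V
```

This also illustrates why Theorem 3.9 below needs no injectivity hypothesis: the kernel can be as large as V itself and is still a subhypervector space.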
Theorem 3.9. Let V, W be two hypervector spaces over a hyperskewfield K and T : V −→ P∗(W) an mv-linear transformation. Then Ker T is a subhypervector space of V.

Proof. By Remark 3.7 we have T(0_V) = 0_W, which means that 0_V ∈ Ker T and Ker T ≠ ∅; moreover x ∈ x ⊕ 0_V for all x ∈ Ker T. The other properties of a canonical subhypergroup are inherited from V.

Theorem 3.10. Let V, U be two hypervector spaces and T : V −→ P∗(U) an mv-linear transformation.
(i) If W is a subhypervector space of V, then T(W) is a subhypervector space of U.
(ii) If L is a subhypervector space of U, then T^{-1}(L) is a subhypervector space of V containing Ker T.

Proof. (i) Let a ∈ K and x', y' ∈ T(W) be such that x' = T(x), y' = T(y) for some x, y ∈ W. Then x' ⊕ y' = T(x) ⊕ T(y) = T(y) ⊕ T(x) = y' ⊕ x', hence commutativity holds. For all x ∈ V we have x = x ⊕ 0_V, so we obtain T(x) = T(x ⊕ 0_V) ⊆ T(x) ⊕ T(0_V) = T(x) ⊕ 0_U. Also, for all x ∈ V there exists −x ∈ V such that 0_V ∈ x ⊕ (−x). By Remark 3.7 we have

0_U = T(0_V) ∈ T(x ⊕ (−x)) ⊆ T(x) ⊕ T(−x) = x' ⊕ x'',

where x'' = T(−x) is the unique inverse of x' = T(x). Now suppose that for all x, y, z ∈ V we have x ∈ y ⊕ z =⇒ y ∈ x ⊕ (−z). This is equivalent to T(x) ∈ T(y ⊕ z) ⊆ T(y) ⊕ T(z) =⇒ T(y) ∈ T(x) ⊕ T(−z). So (T(W), ⊕) is a canonical hypergroup. Now for a ∈ K and x' ∈ T(W) we have a · x' = a · T(x) = T(a · x) ⊆ T(W). Hence (T(W), ⊕, ·) is a subhypervector space of U.

(ii) Let a ∈ K and x, y ∈ T^{-1}(L), and suppose x' = T(x), y' = T(y) for x', y' ∈ L. Since (U, ⊕) is a canonical hypergroup, we have x ⊕ y = T^{-1}(x') ⊕ T^{-1}(y') = T^{-1}(y') ⊕ T^{-1}(x') = y ⊕ x. Also,

x ⊕ 0_V = T^{-1}(x') ⊕ T^{-1}(0_U) ⊇ T^{-1}(x' ⊕ 0_U) ⊇ T^{-1}(x') = x.

For all x' ∈ L there exists −x' such that 0_U ∈ x' ⊕ (−x'), hence for x ∈ T^{-1}(x') there exists T^{-1}(−x') ⊆ T^{-1}(L) such that

x ⊕ (−x) = T^{-1}(x') ⊕ T^{-1}(−x') = T^{-1}(x' ⊕ (−x')) ∋ T^{-1}(0_U) = 0_V.

Now for all x', y', z' ∈ L we have x' ∈ y' ⊕ z' =⇒ y' ∈ x' ⊕ (−z'). Suppose x, y, z ∈ T^{-1}(L).
The above relation is equivalent to

y ⊕ z = T^{-1}(y') ⊕ T^{-1}(z') ⊇ T^{-1}(y' ⊕ z') ⊇ T^{-1}(x') = x
=⇒ x ⊕ (−z) = T^{-1}(x') ⊕ T^{-1}(−z') ⊇ T^{-1}(x' ⊕ (−z')) ⊇ T^{-1}(y') = y,

which means that x ∈ y ⊕ z =⇒ y ∈ x ⊕ (−z). Moreover, a · x = a · T^{-1}(x') = T^{-1}(a · x') ⊆ T^{-1}(L). Hence (T^{-1}(L), ⊕, ·) is a subhypervector space of V. Now for x ∈ Ker T we have 0_U ∈ T(x) and 0_U ∈ L, so x ∈ T^{-1}(L); hence Ker T ⊆ T^{-1}(L).

Theorem 3.11. Let U, V be two hypervector spaces over a hyperskewfield K and T : V −→ P∗(U) a good linear transformation. Then there is a one-to-one correspondence between the subhypervector spaces of V containing Ker T and the subhypervector spaces of U.

Proof. Suppose A = {W | W ≤ V, W ⊇ Ker T} and B = {L | L ≤ U}. We show that the following map is one-to-one and onto:

φ : A −→ B, W ↦−→ T(W).

By Theorem 3.10, T(W) belongs to B for all W ∈ A. Now let W_1, W_2 be two elements of A with W_1 ≠ W_2; then there exists w_1 ∈ W_1 − W_2 or w_2 ∈ W_2 − W_1. If w_1 ∈ W_1 − W_2, then T(w_1) ∈ T(W_1) − T(W_2) and hence T(W_1) ≠ T(W_2); if w_2 ∈ W_2 − W_1, then again T(W_1) ≠ T(W_2). So φ is well defined and one-to-one. Now for L ∈ B put W = T^{-1}(L); then by Theorem 3.10 we have W ∈ A and T(W) = L. Therefore φ is onto, hence the result.

4 Construction of a Hypermatrix

We now discuss the basis of a hypervector space and verify that considering a multivalued linear transformation imposes some conditions on this definition. Finally, from the elements of the hyperfield and of the basis we construct a hypermatrix.

Definition 4.1. A subset S of V is called linearly independent if for all vectors v_1, ..., v_n ∈ S and scalars c_1, ..., c_n ∈ K, 0_V ∈ c_1·v_1 ⊕ ... ⊕ c_n·v_n implies c_1 = ... = c_n = 0_K. Otherwise S is called linearly dependent.

Theorem 4.2. Let V be a hypervector space and let v_1, ..., v_n be independent in V. Then every element of the linear space <v_1, ..., v_n> belongs to a unique sum of the form ∑_{i=1}^{n} a_i · v_i, where a_i ∈ K.

Proof.
Every element of <v_1, ..., v_n> belongs to a set of the form ∑_{i=1}^{n} a_i · v_i, where a_i ∈ K. We show that this form is unique. Let u ∈ V be such that u ∈ ∑_{i=1}^{n} a_i·v_i and u ∈ ∑_{i=1}^{n} b_i·v_i, where a_i, b_i ∈ K. Since V is a hypervector space we have

0_V ∈ u − u ⊆ ∑_{i=1}^{n} a_i·v_i − ∑_{i=1}^{n} b_i·v_i = ∑_{i=1}^{n} a_i·v_i ⊕ ∑_{i=1}^{n} (−b_i)·v_i.

Therefore 0_V ∈ ∑_{i=1}^{n} (a_i ⊕ (−b_i))·v_i, and since v_1, ..., v_n are independent we have a_i ⊕ (−b_i) = 0 for all i; then a_i = −(−b_i) = b_i.

Theorem 4.3. Let V be a hypervector space. Then the vectors v_1, ..., v_n ∈ V are either independent, or some v_j, 1 ≤ j ≤ n, belongs to a linear combination of the other vectors.

Proof. Let v_1, ..., v_n be dependent and let 0_V ∈ ∑_{i=1}^{n} a_i · v_i with at least one scalar, say a_j, not zero. Then there exist t_i (i = 1, ..., n), with t_i = a_i · v_i, such that 0_V ∈ t_1 ⊕ t_2 ⊕ ... ⊕ t_n, which means that

t_j ∈ 0 ⊕ (−(t_1 ⊕ ... ⊕ t_{j−1} ⊕ t_{j+1} ⊕ ... ⊕ t_n)) = (−t_1) ⊕ ... ⊕ (−t_{j−1}) ⊕ (−t_{j+1}) ⊕ ... ⊕ (−t_n).

Moreover, v_j = a_j^{-1} · t_j, which means

v_j ∈ a_j^{-1} · ((−t_1) ⊕ ... ⊕ (−t_{j−1}) ⊕ (−t_{j+1}) ⊕ ... ⊕ (−t_n))
= (a_j^{-1}·(−t_1)) ⊕ ... ⊕ (a_j^{-1}·(−t_{j−1})) ⊕ (a_j^{-1}·(−t_{j+1})) ⊕ ... ⊕ (a_j^{-1}·(−t_n))
= ((a_j^{-1} ⋆ (−a_1))·v_1) ⊕ ... ⊕ ((a_j^{-1} ⋆ (−a_{j−1}))·v_{j−1}) ⊕ ((a_j^{-1} ⋆ (−a_{j+1}))·v_{j+1}) ⊕ ... ⊕ ((a_j^{-1} ⋆ (−a_n))·v_n)
= (c_1·v_1) ⊕ ... ⊕ (c_{j−1}·v_{j−1}) ⊕ (c_{j+1}·v_{j+1}) ⊕ ... ⊕ (c_n·v_n),

where c_i = a_j^{-1} ⋆ (−a_i). Therefore v_j belongs to a linear combination of v_1, ..., v_{j−1}, v_{j+1}, ..., v_n, as desired.

Definition 4.4. We call β a basis for V if it is a linearly independent subset of V and it spans V. We say that V is finite dimensional if it has a finite basis.

The following results generalize the corresponding results for vector spaces; the methods are adapted from those for ordinary vector spaces.

Theorem 4.5. Let V be a hypervector space.
If W is a subhypervector space of V generated by β = {v_1, ..., v_n}, then W has a basis contained in β.

Corollary 4.6. Every generating subset of a hypervector space V contains a basis of V.

Theorem 4.7. Let V be a hypervector space. If V has a finite basis with n elements, then every independent subset of V has at most n elements.

Corollary 4.8. Let V be a strongly left distributive hypervector space. If V is finite dimensional, then any two bases of V have the same number of elements.

Lemma 4.9. Let V be a hypervector space. If V is finite dimensional, then every linearly independent subset of V is contained in a finite basis.

Now we want to determine what a free object in the category of hypervector spaces is. Denoting the category of hypervector spaces by KrH-vect, we define the category as follows:
(i) the objects are the hypervector spaces over a hyperskewfield K;
(ii) for objects V, W of KrH-vect, the morphisms from V to W are the multivalued linear transformations V −→ P∗(W), whose set we denote by Hom(V, W);
(iii) composition of morphisms is defined as usual;
(iv) for every object V of the category, the morphism 1_V : V −→ V is the identity.

According to the definition of a free object in the category of hypersets [2], applied to the category of hypervector spaces, if X is a basis for the hypervector space V, then F is a free object in KrH-vect if for every function f : X −→ V there exists a homomorphism f̄ : F −→ V such that f̄ ∘ i = f, where i is the inclusion function. Now we have

(f̄ ∘ i)(x) = f̄(i(x)) = f̄(x). (⋆)

Since the homomorphism f̄ is defined in KrH-vect, it is a multivalued transformation; defining f̄(x) = {f(x)}, we obtain f̄ ∘ i = f.
Let g : F −→ V be another homomorphism such that g(x_i) = f(x_i). For t ∈ ∑_{i=1}^{n} a_i·x_i, let f̄ be defined by f̄(t) = ∑_{i=1}^{n} a_i·f(x_i); then we have

g(t) ⊆ g(∑_{i=1}^{n} a_i·x_i) = ∑_{i=1}^{n} a_i·g(x_i) = f̄(t).

Hence f̄ defined above is the largest homomorphism satisfying (⋆). Suppose t ∈ ∑_{i=1}^{n} a_i·x_i and t ∈ ∑_{i=1}^{n} b_i·x_i for a_i, b_i ∈ K. We have f̄(t) = ∑_{i=1}^{n} a_i·f(x_i) and also f̄(t) = ∑_{i=1}^{n} b_i·f(x_i), so ∑_{i=1}^{n} a_i·f(x_i) = ∑_{i=1}^{n} b_i·f(x_i), and we obtain

0 ∈ ∑_{i=1}^{n} a_i·f(x_i) − b_i·f(x_i) = ∑_{i=1}^{n} (a_i − b_i)·f(x_i).

So a_i = b_i. Therefore f̄ is a unique mv-transformation. Hence we have the following corollary:

Corollary 4.10. Every hypervector space with a basis is a free object in the category of hypervector spaces.

Theorem 4.11. Let (V, ⊕, ·), (W, ⊕, ·) be two hypervector spaces over a hyperskewfield K. Define L(V, W) = {T | T : V −→ P∗(W) an mv-transformation}, the hyperoperation "⊞" by

(T ⊞ U)(α) = T(α) ⊕ U(α),

and the external composition by

(c ⊙ T)(α) = c · T(α).

Then (L(V, W), ⊞, ⊙) as defined above is a hypervector space over the hyperskewfield K.

Proof. The external composition "⊙" is the map ⊙ : K × L(V, W) −→ L(V, W), (c, T) ↦−→ c ⊙ T. First we show that (L(V, W), ⊞) is a canonical hypergroup. Commutativity and associativity are obvious. We consider the transformation 0 : V −→ {0_W} as the zero of the hypergroup. Then every T has a unique inverse −T, with 0 ∈ (T ⊞ (−T))(α). Now let T, U, Z be three linear transformations belonging to L(V, W); if Z ∈ T ⊞ U then Z(α) ∈ (T ⊞ U)(α), which means Z(α) ∈ T(α) ⊕ U(α). Since W is a hypervector space, we obtain T(α) ∈ Z(α) ⊕ (−U)(α), hence T ∈ Z ⊞ (−U), for all α ∈ V. Therefore (L(V, W), ⊞) is a canonical hypergroup. Now we check that L(V, W) is a hypervector space.
Let x, y ∈ K and T, U ∈ L(V, W); then we have
(1) (x ⊙ (T ⊞ U))(α) = x · (T ⊞ U)(α) = (x · T(α)) ⊕ (x · U(α)) = ((x ⊙ T) ⊞ (x ⊙ U))(α);
(2) ((x ⊕ y) ⊙ T)(α) = ⋃_{z ∈ x⊕y} z · T(α) ⊆ (x · T(α)) ⊕ (y · T(α)) = ((x ⊙ T) ⊞ (y ⊙ T))(α).
The other conditions are obtained immediately. Therefore (L(V, W), ⊞, ⊙) is a hypervector space.

Theorem 4.12. Let (V, ⊕, ·), (W, ⊕, ·) be two hypervector spaces over a hyperskewfield K. If A = {α_1, ..., α_n} is a basis for V and β_1, ..., β_n are any vectors in W, then there is a unique linear transformation T : V −→ P∗(W) such that T(α_i) = β_i, 1 ≤ i ≤ n. In other words, every linear transformation is characterized by its action on a basis of V.

Proof. Since for every v ∈ V there exist scalars c_1, ..., c_n ∈ K such that

(∗) v ∈ ∑_{i=1}^{n} c_i · α_i,

we define a map T : V −→ P∗(W) by

T(v) = ∑_{i=1}^{n} c_i · T(α_i) = ∑_{i=1}^{n} c_i · β_i.

Since the representation (∗) is unique, T is well defined. Now we check that T is a linear transformation. Let v, w ∈ V and let d_1, ..., d_n ∈ K be scalars with v ∈ ∑_{i=1}^{n} c_i · α_i and w ∈ ∑_{i=1}^{n} d_i · α_i; then T(v) = ∑_{i=1}^{n} c_i · T(α_i) and T(w) = ∑_{i=1}^{n} d_i · T(α_i). Now since v ⊕ w ⊆ ∑_{i=1}^{n} (c_i ⊕ d_i) · α_i, we obtain

T(v ⊕ w) ⊆ T(∑_{i=1}^{n} (c_i ⊕ d_i) · α_i) = ∑_{i=1}^{n} (c_i ⊕ d_i) · T(α_i) = ∑_{i=1}^{n} c_i·T(α_i) ⊕ ∑_{i=1}^{n} d_i·T(α_i) = T(v) ⊕ T(w).

Also, it is clear that T(c · v) = c · T(v). Hence T is a linear transformation. Now we check that T is unique. Let S : V −→ P∗(W) be another linear transformation satisfying S(α_i) = β_i. We show that S = T: for α ∈ ∑_{i=1}^{n} c_i · α_i we have

S(α) = ∑_{i=1}^{n} c_i · S(α_i) = ∑_{i=1}^{n} c_i · β_i = ∑_{i=1}^{n} c_i · T(α_i) = T(α).

So S = T, as desired.

Remark 4.13. Let T : V −→ P∗(W) be a linear transformation. We denote Ker T = {α ∈ V | 0 ∈ T(α)} by N_T, and by Im T we mean R_T = {T(α) | α ∈ V}. The dimension of R_T is called the rank of T and is denoted by R(T). Notice that N_T is a subhypervector space of V and R_T is a subhypervector space of W.

Theorem 4.14. Let V, W be two hypervector spaces over a hyperskewfield K, let T : V −→ P∗(W) be a linear transformation, and let dim V = n < ∞.
Then

dim R_T + dim Ker T = dim V.

Proof. Let N = N_T and let β_1 = {α_1, ..., α_k} be a basis for N. We extend β_1 to a basis β_2 = {α_1, ..., α_k, α_{k+1}, ..., α_n} of V. We show that β = {T(α_{k+1}), ..., T(α_n)} is a basis for R_T. Let c_{k+1}, ..., c_n be scalars in K such that

0 ∈ ∑_{i=k+1}^{n} c_i · T(α_i);

then there exists γ ∈ ∑_{i=k+1}^{n} c_i · α_i such that 0 ∈ T(γ). This implies γ ∈ Ker T = N, hence γ ∈ ∑_{i=1}^{k} c_i · α_i. Therefore

0 = γ − γ ∈ ∑_{i=1}^{k} (c_i · α_i) ⊕ ∑_{i=k+1}^{n} ((−c_i) · α_i) =⇒ c_i = 0.

Now we claim that β generates R_T: for every α ∈ V, with α ∈ ∑_{i=1}^{n} c_i · α_i, since 0 ∈ ∑_{i=1}^{k} c_i · T(α_i) we have

T(α) ⊆ T(∑_{i=1}^{n} c_i · α_i) = ∑_{i=1}^{n} c_i · T(α_i) = ∑_{i=1}^{k} c_i · T(α_i) ⊕ ∑_{i=k+1}^{n} c_i · T(α_i) = ∑_{i=k+1}^{n} c_i · T(α_i).

Therefore dim R_T + dim N_T = (n − k) + k = n = dim V.

For all 1 ≤ j ≤ n and 1 ≤ p ≤ m, we define C_{pj} as the coordinate of T(α_j) with respect to the ordered basis B = {β_1, ..., β_m}, which means

T(α_j) = ∑_{p=1}^{m} C_{pj} · β_p.

Writing this with the crisp product and the hypersum, we obtain the following hypermatrix equation:

( C_{11} C_{21} ... C_{m1} )   ( β_1 )   ( C_{11}·β_1 ⊕ ... ⊕ C_{m1}·β_m )   ( T(α_1) )
(   ...          ...       ) · ( ... ) = (              ...              ) = (   ...   )
( C_{1n} C_{2n} ... C_{mn} )   ( β_m )   ( C_{1n}·β_1 ⊕ ... ⊕ C_{mn}·β_m )   ( T(α_n) )

Theorem 4.15. Let V, W be two hypervector spaces. If dim V = n and dim W = m, then dim L(V, W) = mn.

Proof. Let A = {α_1, ..., α_n} and B = {β_1, ..., β_m} be bases of V and W respectively. For each pair (p, q) with 1 ≤ p ≤ m and 1 ≤ q ≤ n, Theorem 4.12 gives a unique linear transformation T_{pq} : V −→ P∗(W) defined by T_{pq}(α_i) = β_p if i = q, and 0 otherwise. Since there are mn such linear transformations from V to P∗(W), it is sufficient to show that β'' = {T_{pq} | 1 ≤ p ≤ m, 1 ≤ q ≤ n} is a basis for L(V, W). Let T : V −→ P∗(W) be a linear transformation, and for all 1 ≤ j ≤ n let C_{1j}, ..., C_{mj} be the coordinates of T(α_j) in the ordered basis B, i.e., T(α_j) = ∑_{p=1}^{m} C_{pj} · β_p.
We will show that T = ∑_{p=1}^{m} ∑_{q=1}^{n} C_{pq} · T_{pq}, so that β'' generates L(V, W). Indeed, put U = ∑_{p=1}^{m} ∑_{q=1}^{n} C_{pq} · T_{pq}; then, since T_{pq}(α_j) = β_p only when q = j, we obtain

U(α_j) = ∑_{p=1}^{m} ∑_{q=1}^{n} C_{pq} · T_{pq}(α_j) = ∑_{p=1}^{m} C_{pj} · β_p = T(α_j).

Also, it is obvious that β'' is independent. Hence the result.

Remark 4.16. Let T : V −→ P∗(W) and S : W −→ P∗(Z) be two linear transformations and α ∈ V. We define

(S ∘ T)(α) = S(T(α)) = ⋃_{β ∈ T(α)} S(β);

then S ∘ T is also a linear transformation.

Definition 4.17. Let T : V −→ P∗(V) be a linear transformation; we call T a linear operator (or shortly an operator) on V, and we denote T ∘ T by T².

Lemma 4.18. Let V be a hypervector space over a hyperskewfield K. If U, T, S are three operators on V and k ∈ K, then the following results are immediate:
(i) I ∘ U = U ∘ I = U;
(ii) (S ⊞ T) ∘ U = S ∘ U ⊞ T ∘ U, and U ∘ (S ⊞ T) = U ∘ S ⊞ U ∘ T;
(iii) k · (U ∘ T) = (k·U) ∘ T = U ∘ (k·T). □

Example 4.19. Let β = {α_1, ..., α_n} be an ordered basis for the hypervector space V, and consider the operators T_{pq} of the proof of Theorem 4.15. These n² operators constitute a basis for the space of operators on V. Let S, U be two operators on V; then

S = ∑_p ∑_q C_{pq} · T_{pq},   U = ∑_r ∑_s B_{rs} · T_{rs}.

Now, using Lemma 4.18 and U(α_i) = ∑_r B_{ri} · α_r, we have

(S ∘ U)(α_i) = S(U(α_i)) = S(∑_r B_{ri} · α_r) ⊆ ∑_r B_{ri} · S(α_r) = ∑_r B_{ri} · (∑_p C_{pr} · α_p) = ∑_p (∑_r C_{pr} ⋆ B_{ri}) · α_p = ∑_p (CB)_{pi} · α_p.

Hence, when we compose two operators S and U, the result is obtained by multiplying their matrices. □

Now it is time to discuss the inverse of a transformation. As usual for defining an inverse we have:

Definition 4.20. Let T : V −→ P∗(W) be one-to-one and onto.
T is said to have an inverse when there exists U : W −→ P∗(V) such that U ∘ T = I_V and T ∘ U = I_W. The inverse of T is denoted by T^{-1}, and it is not necessarily unique. We have (U ∘ T)^{-1} = T^{-1} ∘ U^{-1}. We say that a linear transformation T is nonsingular if 0 ∈ T(α) implies α = 0, which means that the null space of T equals {0}.

Lemma 4.21. Let T : V −→ P∗(W) be a linear transformation. Then T is one-to-one if and only if T is nonsingular, if and only if Ker T = 0.

Proof. Let T be one-to-one and suppose 0 ∈ T(α). Since T(0) = 0, we have T(0) ∈ T(α), so T(0) ∈ T(α ⊕ 0) ⊆ T(α) ⊕ T(0), whence T(α) ∈ T(0) ⊕ (−T(0)) = 0 and hence α = 0. Conversely, let T be nonsingular and suppose that T(x) = T(y) for some x, y ∈ V; then 0 ∈ T(x) − T(y) = T(x − y), and since T is nonsingular we obtain x − y = 0, that is, x = y. Now, if α ∈ Ker T then 0 ∈ T(α), so nonsingularity gives α = 0, which means Ker T = 0. Conversely, if Ker T = 0, then 0 ∈ T(α) implies α ∈ Ker T = 0, hence α = 0.

Theorem 4.22. Let V, W be two hypervector spaces over a hyperfield K and let T : V −→ P∗(W) be a linear transformation. If T is a good reversible linear transformation, then the inverse of T is also a good linear transformation.

Proof. Let w_1, w_2 ∈ W and k ∈ K; then there exist v_1, v_2 ∈ V such that T^{-1}(w_1) = v_1 and T^{-1}(w_2) = v_2, where T(v_1) = w_1 and T(v_2) = w_2. We have

T^{-1}(w_1 ⊕ w_2) = T^{-1}(T(v_1) ⊕ T(v_2)) ⊇ T^{-1}(T(v_1 ⊕ v_2)) = v_1 ⊕ v_2 = T^{-1}(w_1) ⊕ T^{-1}(w_2),

and when T is a good linear transformation, equality holds, so T^{-1} is also a good linear transformation.

Theorem 4.23. Let T : V −→ P∗(W) be a linear transformation. Then T is nonsingular if and only if T maps every linearly independent subset of V onto a linearly independent subset of W.

Proof. Let T be nonsingular and let S be a linearly independent subset of V. We show that T(S) is independent. Let s'_i ∈ T(S); for each i there exists s_i ∈ S such that T(s_i) = s'_i.
We assume

0 ∈ ∑_{i=1}^{n} c_i · s'_i =⇒ 0 ∈ ∑_{i=1}^{n} c_i · T(s_i) =⇒ 0 ∈ T(∑_{i=1}^{n} c_i · s_i).

Because T is nonsingular we have 0 ∈ ∑_{i=1}^{n} c_i · s_i, and since the s_i are linearly independent, c_i = 0 for all i; hence T(S) is linearly independent. Conversely, let 0 ≠ α ∈ V; then {α} is an independent set. Hence by hypothesis T maps this independent set to a linearly independent set T(α) ∈ P∗(W), so T(α) ≠ 0. Therefore T is nonsingular.

Theorem 4.24. Let V, W be two finite-dimensional hypervector spaces over a hyperskewfield K with dim V = dim W. If T : V −→ P∗(W) is a linear transformation, then the following are equivalent:
(i) T is reversible;
(ii) T is nonsingular;
(iii) T is onto;
(iv) if {α_1, ..., α_n} is a basis for V, then {T(α_1), ..., T(α_n)} is a basis for W;
(v) there exists a basis {α_1, ..., α_n} for V such that {T(α_1), ..., T(α_n)} is a basis for W.

Lemma 4.25. Let V be a finite-dimensional hypervector space over a hyperfield K; then V ≅ K^n.

5 Fundamental Relations

Let (V, ⊕, ·) be a hypervector space. We define the relation ε∗ as the smallest equivalence relation on V such that the set of all equivalence classes, V/ε∗, is an ordinary vector space. ε∗ is called the fundamental equivalence relation on V, and V/ε∗ is the fundamental vector space. Let ε∗(v) denote the equivalence class containing v ∈ V; then we define ⊞ and ⊙ on V/ε∗ as follows:

ε∗(v) ⊞ ε∗(w) = ε∗(z), for all z ∈ ε∗(v) ⊕ ε∗(w),
a ⊙ ε∗(v) = ε∗(z), for all z ∈ a · ε∗(v), a ∈ K.

Let U be the set of all finite linear combinations of elements of V with coefficients in K, that is, U = {∑_{i=1}^{n} a_i · v_i : a_i ∈ K, v_i ∈ V, n ∈ N}. We define the relation ε as follows:

v ε w ⇐⇒ ∃ u ∈ U such that {v, w} ⊆ u.

Koskas [4] introduced the relation β∗ on hypergroups as the smallest equivalence relation such that the quotient of the hypergroup by β∗ is a group. We denote by β₊ the relation on V defined as follows:

v β₊ w ⇐⇒ ∃ (c_1, ..., c_n) ∈ V^n such that {v, w} ⊆ c_1 ⊕ ... ⊕ c_n.
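The relation β₊ can be computed by brute force on a small example. The following sketch (our own hypothetical instance, not from the paper) takes the two-element hyperfield K = {0, 1}, with 1 ⊕ 1 = {0, 1}, viewed as a hypervector space over itself, and checks that 0 β₊ 1 — so here all elements collapse to a single fundamental class:

```python
from itertools import product

def hsum(x, y):
    # hypersum of K = {0, 1}, with 1 ⊕ 1 = {0, 1}
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    return {0, 1}

def hsum_sets(A, B):
    return set().union(*(hsum(a, b) for a in A for b in B))

V = {0, 1}

# enumerate every hypersum c1 ⊕ ... ⊕ cn for n up to 3
sums = []
for n in (1, 2, 3):
    for tup in product(V, repeat=n):
        S = {tup[0]}
        for c in tup[1:]:
            S = hsum_sets(S, {c})
        sums.append(S)

# v β₊ w  ⇐⇒  {v, w} is contained in some hypersum; here {0, 1} ⊆ 1 ⊕ 1
related = any({0, 1} <= S for S in sums)
assert related
```

Since 0 β₊ 1, the fundamental vector space of this example is the zero space, illustrating how the quotient by ε∗ = β∗₊ erases all multivalued behaviour.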
Freni proved that for hyperrings we have β∗₊ = β₊. Since here (V, ⊕) is a canonical hypergroup, we have:

Theorem 5.1. In the hypervector space (V, ⊕, ·) we have ε∗ = β∗₊.

Vougiouklis [9] proved that the sets {ε∗(z) : z ∈ ε∗(v) ⊕ ε∗(w)} and {ε∗(z) : z ∈ a · ε∗(v)} are singletons. With a similar method we can prove the following theorem:

Theorem 5.2. Let (V, ⊕, ·) be a hypervector space. Then for all a ∈ K and v, w ∈ V we have the following:
(i) ε∗(v) ⊞ ε∗(w) = ε∗(z) for all z ∈ ε∗(v) ⊕ ε∗(w), and a ⊙ ε∗(v) = ε∗(z) for all z ∈ a · ε∗(v);
(ii) ε∗(0_V) is the zero element of (V/ε∗, ⊞);
(iii) (V/ε∗, ⊞, ⊙) is a vector space, called the fundamental vector space of V.

Proof. (i) The proof is the same as in [9], and we omit it.
(ii) Since from (i) we obtain ε∗(v) ⊞ ε∗(w) = ε∗(v ⊕ w) and a ⊙ ε∗(v) = ε∗(a · v), we have ε∗(v) ⊞ ε∗(0) = ε∗(v ⊕ 0) = ε∗(v).
(iii) The vector space conditions for (V/ε∗, ⊞, ⊙) are obtained from those of the hypervector space (V, ⊕, ·).

Theorem 5.3. Let (V, ⊕, ·, K) be a hypervector space and (V/ε∗, ⊞, ⊙) its fundamental vector space. Then dim V = dim V/ε∗.

Proof. Let B = {v_1, ..., v_n} be a basis for V. We show that the set B∗ = {ε∗(v_1), ..., ε∗(v_n)} is a basis for V/ε∗. Let ε∗(v) ∈ V/ε∗; for every v ∈ V there exist a_1, ..., a_n ∈ K such that v ∈ ∑_{i=1}^{n} a_i · v_i, so v ∈ t_1 ⊕ ... ⊕ t_n, where t_i = a_i · v_i, i ∈ {1, ..., n}. Now by Theorem 5.2 we have ε∗(t_i) = a_i ⊙ ε∗(v_i), and then

ε∗(v) = ε∗(t_1 ⊕ ... ⊕ t_n) = ε∗(t_1) ⊞ ... ⊞ ε∗(t_n) = (a_1 ⊙ ε∗(v_1)) ⊞ ... ⊞ (a_n ⊙ ε∗(v_n)).

Hence V/ε∗ is spanned by B∗. Now we show that B∗ is linearly independent. For this let

(a_1 ⊙ ε∗(v_1)) ⊞ ... ⊞ (a_n ⊙ ε∗(v_n)) = ε∗(0)
=⇒ ε∗(a_1 · v_1) ⊞ ... ⊞ ε∗(a_n · v_n) = ε∗(0)
=⇒ ε∗(a_1 · v_1 ⊕ ... ⊕ a_n · v_n) = ε∗(0)
=⇒ 0 ∈ a_1 · v_1 ⊕ ... ⊕ a_n · v_n.

Since B is linearly independent in V, we get a_1 = ... = a_n = 0. Therefore B∗ is also linearly independent.

Lemma 5.4.
Let V, W be two hypervector spaces and T : V −→ P∗(W) a linear transformation. Then
(i) T(ε∗(v)) ⊆ ε∗(T(v)), for all v ∈ V;
(ii) the map T∗ : V/ε∗ −→ W/ε∗ defined by T∗(ε∗(v)) = ε∗(T(v)) is a linear transformation.

Proof. (i) Straightforward.
(ii) It is obvious that T∗ is well defined. Now we show that T∗ is a linear transformation. Let a ∈ K and x, y ∈ V; then by Theorem 5.2 we have

T∗(ε∗(x) ⊞ ε∗(y)) = T∗(ε∗(x ⊕ y)) = ε∗(T(x ⊕ y)) ⊆ ε∗(T(x) ⊕ T(y)) = ε∗(T(x)) ⊞ ε∗(T(y)) = T∗(ε∗(x)) ⊞ T∗(ε∗(y))

and

T∗(a ⊙ ε∗(x)) = T∗(ε∗(a·x)) = ε∗(T(a·x)) = ε∗(a·T(x)) = a ⊙ ε∗(T(x)) = a ⊙ T∗(ε∗(x)).

Hence T∗ is a linear transformation.

Theorem 5.5. The map F : HV −→ V defined by F(V) = V/ε∗ and F(T) = T∗ is a functor, where HV and V denote the category of hypervector spaces and the category of vector spaces, respectively. Moreover, F preserves dimension.

Proof. By Lemma 5.4, F is well defined. Let T : V −→ P∗(W) and U : W −→ P∗(Z) be two linear transformations; then F(U ∘ T) = (U ∘ T)∗, and for all v ∈ V we have

(U ∘ T)∗(ε∗(v)) = ε∗((U ∘ T)(v)) = ε∗(U(T(v))) = U∗(ε∗(T(v))) = U∗(T∗(ε∗(v))) = F(U)F(T)(ε∗(v)),

so F(U ∘ T) = F(U)F(T). Also, the identity is F(1_V) = 1∗_V : V/ε∗ −→ V/ε∗ with 1∗_V(ε∗(v)) = ε∗(v). Hence F is a functor, and by Theorem 5.3 we have dim(F(V)) = dim(V/ε∗) = dim(V).

Theorem 5.6. Let T : V −→ P∗(W) be a linear transformation in HV. Then the following diagram is commutative,

V    −−T−→    W
ϕ_V ↓         ↓ ϕ_W
V/ε∗ −−T∗−→ W/ε∗

where ϕ_V, ϕ_W are the canonical projections of V and W.

Proof. Let v ∈ V; then ϕ_W(T(v)) = ε∗(T(v)) = T∗(ε∗(v)) = T∗(ϕ_V(v)). Hence the diagram is commutative.

Acknowledgement. The first author has been financially supported by the "Office of the Vice Chancellor of Research and Technology of Islamic Azad University, Qaemshahr Branch".

References

[1] R. Ameri and O. R. Dehghan, On Dimension of Hypervector Spaces, European J. Pure Appl. Math., Vol. 1, No. 2, 32–50 (2008).

[2] P.
Corsini, Prolegomena of Hypergroup Theory, second edition, Aviani Editore (1993).

[3] P. Corsini and V. Leoreanu, Applications of Hyperstructure Theory, Kluwer Academic Publishers (2003).

[4] M. Koskas, Groupoïdes, demi-groupes et hypergroupes, J. Math. Pures Appl. 49, 155–192 (1970).

[5] F. Marty, Sur une généralisation de la notion de groupe, 8th Congress des Mathématiciens Scandinaves, Stockholm, 45–49 (1934).

[6] M. S. Tallini, Hypervector Spaces, 4th Int. Congress on AHA, 167–174 (1990).

[7] M. S. Tallini, Weak Hypervector Spaces and Norms in such Spaces, Algebraic Hyperstructures and Applications, Hadronic Press, 199–206 (1994).

[8] T. Vougiouklis, Hyperstructures and their Representations, Hadronic Press, Inc. (1994).

[9] T. Vougiouklis, HV Vector Spaces, 5th Int. Congress on AHA, Iasi, Romania, 181–190 (1994).