IBN AL-HAITHAM J. FOR PURE & APPL. SCI. VOL. 24 (1) 2011

N-Topological Space and Its Applications in Artificial Neural Networks

L.N.M. Tawfiq and R.N. Majeed
Department of Mathematics, College of Education - Ibn Al-Haitham, Baghdad University.
Received in March 16, 2009. Accepted in March 29, 2010.

Abstract
In this paper we give definitions, properties and examples of the notion of N-topological space, where throughout N is a finite positive number, N ≥ 2. The task of this paper is to study and investigate some properties of such spaces and to establish a relation between this space and artificial neural networks (NN's); that is, we apply the definition of this space in the computer field, in particular in parallel processing.

Introduction
Finite spaces were first studied by P. Alexandroff in 1937. Actually, finite spaces had been investigated even earlier by many authors under the name of simplicial complexes. There were several other contributions, by Flachsmeyer in 1961, Stong in 1966 and L. Lotz in 1970. In this paper we define and study the notion of N-topological space and discuss some properties of finite spaces. However, the subject has never been considered a main field of topology. With the progress of computer technology, finite spaces have become more important: Herman in 1990, Khalimsky et al. in 1990, Kong and Kopperman in 1991 and [1] have applied them to model the computer screen. In this paper we focus on N-topological spaces. The main importance of this study is to offer new formulations of the separation axioms in N-topological spaces. We also present and study comparisons between N-topological spaces and NN's in the case of finite spaces.

2. Basic Definitions of the N-topological Space and Their Properties
In this section we introduce the notion of N-topological space and give its properties.
Several of the classical results [2] are extended by defining appropriate substructures on the N-topological space. Examples are given to illustrate these structures.

Definition 2.1
Let X be a non-empty set with N different topologies τ₁, τ₂, …, τ_N. (X, τ₁, τ₂, …, τ_N) is called an "N-topological space" if there exist N proper subspaces X₁, X₂, …, X_N of X such that:
1. X = X₁ ∪ X₂ ∪ … ∪ X_N;
2. τᵢ* = {Xᵢ ∩ U : U ∈ τᵢ}, so that (Xᵢ, τᵢ*) is a subspace of (X, τᵢ), where i = 1, 2, …, N.

Example 2.2
Let ({1, 2, 3}, τ₁, τ₂, τ₃) be a 3-topological space where X = {1, 2, 3}, τ₁ = {X, ∅}, τ₂ = {X, {1}, ∅} and τ₃ = {X, {2}, ∅}. Let X₁ = {1}, X₂ = {2} and X₃ = {3}. It is clear that X = X₁ ∪ X₂ ∪ X₃ and that τ₁* = {{1}, ∅}, τ₂* = {{2}, ∅}, τ₃* = {{3}, ∅}. It is clear that (Xᵢ, τᵢ*) is a subspace of (X, τᵢ) for i = 1, 2, 3.

Now, we give the definition of an open set in an N-topological space.

Definition 2.3
A subset U of an N-topological space (X, τ₁, …, τ_N) is said to be an "N-open set" if and only if it is open in (X, τᵢ) for some i = 1, 2, …, N.

Definition 2.4
The complement of an N-open set in an N-topological space (X, τ₁, …, τ_N) is said to be an "N-closed set".

Remark 2.5
1. Every open set in a topological space (X, τᵢ) is an N-open set, for all i = 1, 2, …, N, but the converse is not true (see Example 2.6).
2. Every closed set in a topological space (X, τᵢ) is an N-closed set, for all i = 1, 2, …, N, but the converse is not true (see Example 2.6).
3. An open set in the subspace (Xᵢ, τᵢ*), i = 1, 2, …, N, need not be an N-open set in the N-topological space (X, τ₁, …, τ_N); it is one only if the subspace Xᵢ is open in (X, τᵢ) (see Example 2.6).

Example 2.6
Let (ℕ, τ₁, τ₂, τ₃, τ₄) be a 4-topological space (where ℕ is the set of natural numbers), for instance with τ₁ = {ℕ, ∅, {1}}, τ₂ = {ℕ, ∅, {2}}, τ₃ = {ℕ, ∅} and τ₄ = {ℕ, ∅, {4, 5, 6, …}}, and let X₁, X₂, X₃, X₄ be four subspaces of X such that:
X₁ = {1}, which implies τ₁* = {{1}, ∅};
X₂ = {2}, which implies τ₂* = {{2}, ∅};
X₃ = {3}, which implies τ₃* = {{3}, ∅};
X₄ = {4, 5, 6, 7, …}, which implies τ₄* = {{4, 5, 6, 7, …}, ∅}.
It is clear that X = X₁ ∪ X₂ ∪ X₃ ∪ X₄. Now, to show that the converse of part (1) is not true: {1} is an N-open set (it is open in (X, τ₁)), but it is not an open set in every (X, τᵢ), i = 1, 2, …, N.
Also, to show that the converse of part (2) is not true: {2, 3, 4, 5, …} is an N-closed set, but it is not a closed set in every (X, τᵢ), i = 1, 2, …, N. Notice also that the subspace X₁ = {1} is open in (X, τ₁), which implies that each open set in (X₁, τ₁*) is an N-open set in (X, τ₁, τ₂, τ₃, τ₄). On the other hand, the subspace X₃ = {3} is not open in (X, τ₃), which implies that an open set in (X₃, τ₃*) need not be an N-open set in (X, τ₁, τ₂, τ₃, τ₄).

Next, we give the definition of a sub N-topological space.

Definition 2.7
Let (X, τ₁, …, τ_N) be an N-topological space (N ≥ 2). The subspace Y (⊆ X) of X is called a sub N-topological space of X if and only if:
(i) there exist N proper subspaces Y₁, …, Y_N of Y such that Y = Y₁ ∪ … ∪ Y_N, and (Y, τ₁ʸ, …, τ_Nʸ) is a sub N-topological space of (X, τ₁, …, τ_N), where τᵢʸ = {Y ∩ U : U ∈ τᵢ};
(ii) τᵢ* = {Yᵢ ∩ V : V ∈ τᵢʸ} is a subspace topology of (Y, τᵢʸ).

Example 2.8
Let (X, τ₁, τ₂, τ₃) be a 3-topological space where X = {1, 2, 3, 4, 5, 6, 7}, τ₁ = {X, ∅}, τ₂ = {X, ∅, {2, 4, 6}} and τ₃ = {X, ∅, {3, 7}}. Let Y = {2, 3, 5, 6}. Then τ₁ʸ = {Y, ∅}, τ₂ʸ = {Y, ∅, {2, 6}} and τ₃ʸ = {Y, ∅, {3}}. It is clear that (Y, τ₁ʸ, τ₂ʸ, τ₃ʸ) is a sub 3-topological space of (X, τ₁, τ₂, τ₃), where Y₁ = {2, 3}, Y₂ = {5}, Y₃ = {6} and τᵢ* = {Yᵢ, ∅} for i = 1, 2, 3.

Now, we introduce definitions and examples of the separation axioms in N-topological spaces.

Definition 2.9
An N-topological space (X, τ₁, …, τ_N) is said to be an "N-T₀-space" if and only if for each pair of distinct points x, y ∈ X there exists an N-open set U of X such that x ∈ U and y ∉ U.

Proposition 2.10
An N-topological space (X, τ₁, …, τ_N) is an N-T₀-space if (X, τᵢ) is a T₀-space for some i = 1, 2, …, N.

Proof
To prove that (X, τ₁, …, τ_N) is an N-T₀-space, we must prove that for any x, y ∈ X with x ≠ y there exists an N-open set U of X such that x ∈ U and y ∉ U. Now, let x, y ∈ X, x ≠ y. Since there exists i ∈ {1, 2, …, N} such that (X, τᵢ) is a T₀-space, there exists an open set U in τᵢ such that x ∈ U and y ∉ U; therefore U is an N-open set of X with x ∈ U and y ∉ U (by Definition 2.3). Thus (X, τ₁, …, τ_N) is an N-T₀-space. ■
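The finite examples above can be checked mechanically. The sketch below is our own illustration, not part of the paper (the helper name `subspace_topology` is ours); it verifies the induced topologies of Examples 2.2 and 2.8 with plain Python sets.

```python
# Sets are modeled as frozensets; a topology is a set of frozensets.
def subspace_topology(tau, A):
    """Induced topology {A ∩ U : U ∈ tau} on the subspace A."""
    return {frozenset(A & U) for U in tau}

# Example 2.2: X = {1, 2, 3} with three topologies.
X = frozenset({1, 2, 3})
e = frozenset()
tau1, tau2, tau3 = {X, e}, {X, frozenset({1}), e}, {X, frozenset({2}), e}
X1, X2, X3 = frozenset({1}), frozenset({2}), frozenset({3})

assert X1 | X2 | X3 == X                      # condition 1 of Definition 2.1
assert subspace_topology(tau1, X1) == {X1, e} # the induced tau_i*
assert subspace_topology(tau2, X2) == {X2, e}
assert subspace_topology(tau3, X3) == {X3, e}

# Definition 2.3: N-open = open in some (X, tau_i); {3} is not N-open.
n_open = tau1 | tau2 | tau3
assert frozenset({1}) in n_open and frozenset({3}) not in n_open

# Example 2.8: Y = {2, 3, 5, 6} inside X = {1, ..., 7}.
X7 = frozenset(range(1, 8))
s1, s2, s3 = {X7, e}, {X7, e, frozenset({2, 4, 6})}, {X7, e, frozenset({3, 7})}
Y = frozenset({2, 3, 5, 6})
assert subspace_topology(s2, Y) == {Y, e, frozenset({2, 6})}
assert subspace_topology(s3, Y) == {Y, e, frozenset({3})}
```

Because the spaces are finite, every claim in these examples reduces to such a set computation.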
Remark 2.11
The converse of Proposition 2.10 is not true. To see this, let ({1, 2, 3}, τ₁, τ₂, τ₃) be the 3-topological space of Example 2.2; this space is a 3-T₀-space, but each (X, τᵢ) is not a T₀-space.

Remark 2.12
If (X, τ₁, …, τ_N) is an N-T₀-space, then (X, τᵢ) need not be a T₀-space for all i = 1, 2, …, N; this holds only if (X, τᵢ) is itself a T₀-space.

Theorem 2.13
Let (X, τ₁, …, τ_N) be an N-T₀-space and let (Y, τ₁ʸ, …, τ_Nʸ) be a sub N-topological space of the N-topological space (X, τ₁, …, τ_N). Then (Y, τ₁ʸ, …, τ_Nʸ) is also an N-T₀-space.

Proof
To prove that (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₀-space, we must prove that for all x, y ∈ Y with x ≠ y there exists U ∈ τᵢʸ for some i such that x ∈ U and y ∉ U. Now, let x, y ∈ Y, x ≠ y; then x, y ∈ X. Since (X, τ₁, …, τ_N) is an N-T₀-space, there exists W ∈ τᵢ for some i such that (x ∈ W and y ∉ W) or (x ∉ W and y ∈ W). Then Y ∩ W ∈ τᵢʸ for that i (by the definition of τᵢʸ). So x ∈ W implies x ∈ Y ∩ W and y ∉ W implies y ∉ Y ∩ W; or x ∉ W implies x ∉ Y ∩ W and y ∈ W implies y ∈ Y ∩ W. Then (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₀-space. ■

Example 2.14
Let ({1, 2, 3, 4}, τ₁, τ₂, τ₃) be a 3-T₀-space where τ₁ = {X, ∅}, τ₂ = {X, ∅, {2}, {4}, {2, 4}} and τ₃ = {X, ∅, {2, 4}, {3, 4}, {4}, {2, 3, 4}}. Let Y = {1, 3, 4} ⊆ X. Then (Y, τ₁ʸ, τ₂ʸ, τ₃ʸ) is a sub 3-T₀-space, where τ₁ʸ = {Y, ∅}, τ₂ʸ = {Y, ∅, {4}} and τ₃ʸ = {Y, ∅, {4}, {3, 4}}.

Definition 2.15
An N-topological space (X, τ₁, …, τ_N) is said to be an "N-T₁-space" if and only if for each pair of distinct points x, y ∈ X there exist two N-open sets U and V of X such that x ∈ U, y ∉ U and x ∉ V, y ∈ V.

Proposition 2.16
An N-topological space (X, τ₁, …, τ_N) is an N-T₁-space if (X, τᵢ) is a T₁-space for some i = 1, 2, …, N.

Proof
To prove that (X, τ₁, …, τ_N) is an N-T₁-space, we must prove that for any x, y ∈ X with x ≠ y there exist two N-open sets U and V of X such that x ∈ U, y ∉ U and x ∉ V, y ∈ V. Now, let x, y ∈ X, x ≠ y. Since there exists i ∈ {1, 2, …, N} such that (X, τᵢ) is a T₁-space, there exist two open sets U and V in τᵢ such that x ∈ U, y ∉ U and x ∉ V, y ∈ V; therefore U and V are N-open sets of X with the same property (by Definition 2.3). Thus (X, τ₁, …, τ_N) is an N-T₁-space. ■
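On a finite space the separation axioms can be verified by brute force, which makes Remark 2.11 and the T₀/T₁ contrast easy to test. The checker below is our own sketch (the function names are not from the paper), applied to the space of Example 2.2.

```python
def is_T0(points, opens):
    """Some open set contains exactly one of each pair of distinct points."""
    return all(any((x in U) != (y in U) for U in opens)
               for x in points for y in points if x != y)

def is_T1(points, opens):
    """For every ordered pair x != y, some open set contains x but not y."""
    return all(any(x in U and y not in U for U in opens)
               for x in points for y in points if x != y)

X = {1, 2, 3}
tau1 = [set(X), set()]
tau2 = [set(X), {1}, set()]
tau3 = [set(X), {2}, set()]
# N-open sets: open in some (X, tau_i), per Definition 2.3
n_open = [U for t in (tau1, tau2, tau3) for U in t]

assert is_T0(X, n_open)                                # 3-T0 (Remark 2.11)
assert not any(is_T0(X, t) for t in (tau1, tau2, tau3))  # no (X, tau_i) is T0
assert not is_T1(X, n_open)    # an N-T0-space need not be N-T1 (cf. the Note)
```

The last assertion fails the T₁ test at the point 3, which lies in no N-open set other than X itself.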
Remark 2.17
The converse of Proposition 2.16 is not true, as the following example shows.

Example 2.18
Let X = {1, 2, 3, …, n}, n ∈ ℕ, and let τᵢ = {X, ∅, {i}} for i = 1, 2, 3, …, n. Then (X, τ₁, …, τₙ) is an n-topological space. Notice that (X, τ₁, …, τₙ) is an N-T₁-space, but each (X, τᵢ) is not a T₁-space, for all i.

Note
1. It is clear that each N-T₁-space is also an N-T₀-space, but the converse is not true: see Example 2.14, whose space is a 3-T₀-space but not a 3-T₁-space.
2. If an N-topological space is not an N-T₀-space, then it is not an N-T₁-space.

Theorem 2.19
Let (X, τ₁, …, τ_N) be an N-T₁-space and let (Y, τ₁ʸ, …, τ_Nʸ) be a sub N-topological space of (X, τ₁, …, τ_N). Then (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₁-space.

Proof
To prove that (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₁-space, we must prove that for all x, y ∈ Y with x ≠ y there exist U, V ∈ τᵢʸ for some i such that x ∈ U, y ∉ U and x ∉ V, y ∈ V. Now, since Y ⊆ X, we have x, y ∈ X, and X is an N-T₁-space, so there exist U′, V′ ∈ τᵢ for some i such that x ∈ U′, y ∉ U′ and x ∉ V′, y ∈ V′. Then Y ∩ U′, Y ∩ V′ ∈ τᵢʸ for that i (by the definition of τᵢʸ). So x ∈ Y ∩ U′, y ∉ Y ∩ U′ and x ∉ Y ∩ V′, y ∈ Y ∩ V′. Then (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₁-space. ■

Definition 2.20
An N-topological space (X, τ₁, …, τ_N) is said to be an "N-T₂-space" if and only if for all x, y ∈ X with x ≠ y there exist N-open sets U, V of X such that U ∩ V = ∅, x ∈ U and y ∈ V.

Proposition 2.21
An N-topological space (X, τ₁, …, τ_N) is an N-T₂-space if (X, τᵢ) is a T₂-space for some i = 1, 2, …, N.

Proof
To prove that (X, τ₁, …, τ_N) is an N-T₂-space, we must prove that for any x, y ∈ X with x ≠ y there exist two N-open sets U and V of X such that U ∩ V = ∅, x ∈ U and y ∈ V. Now, let x, y ∈ X, x ≠ y. Since there exists i ∈ {1, 2, …, N} such that (X, τᵢ) is a T₂-space, there exist two open sets U and V in τᵢ such that U ∩ V = ∅, x ∈ U and y ∈ V; therefore U and V are N-open sets of X with the same property (by Definition 2.3). Thus (X, τ₁, …, τ_N) is an N-T₂-space. ■

Remark 2.22
The converse of Proposition 2.21 is not true: consider the n-topological space (X, τ₁, …, τₙ) of Example 2.18, which is an n-T₂-space although each (X, τᵢ) is not a T₂-space, for all i.

Note
1. It is clear that each N-T₂-space is also an N-T₁-space, and hence an N-T₀-space, but the converse is not true.
2. If an N-topological space is not an N-T₀-space, then it is not an N-T₁-space, and hence it is not an N-T₂-space.

Theorem 2.23
Let (X, τ₁, …, τ_N) be an N-T₂-space and let (Y, τ₁ʸ, …, τ_Nʸ) be a sub N-topological space of (X, τ₁, …, τ_N). Then (Y, τ₁ʸ, …, τ_Nʸ) is an N-T₂-space.

Proof
The proof of this theorem is similar to the proof of Theorem 2.19. ■

3. Application of N-topological Space in NN's

3.1. What are Artificial Neural Networks?
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANN's, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANN's as well [3]. That is, artificial neural networks are relatively crude electronic models based on the neural structure of the brain. The brain basically learns from experience. It is natural proof that some problems that are beyond the scope of current computers are indeed solvable by small, energy-efficient packages. This brain modeling also promises a less technical way to develop machine solutions. This new approach to computing also provides a more graceful degradation during system overload than its more traditional counterparts. These biologically inspired methods of computing are thought to be the next major advancement in the computing industry.
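The "learning by adjusting synaptic connections" idea can be seen in its smallest form in a single artificial neuron. The sketch below is our own illustration, not from the paper: one neuron trained by the classical perceptron rule to compute the logical AND function (the learning rate and epoch count are arbitrary choices).

```python
def step(t):
    """Threshold transfer function: fire (1) when the weighted sum is >= 0."""
    return 1 if t >= 0 else 0

samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1          # weights, bias, learning rate

for _ in range(20):                       # a few passes over the data
    for x, target in samples:
        out = step(w[0] * x[0] + w[1] * x[1] + b)
        err = target - out                # nudge weights toward the target
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

assert all(step(w[0] * x[0] + w[1] * x[1] + b) == t for x, t in samples)
```

Each update strengthens or weakens a connection according to the output error, which is exactly the weight-adjustment process described above, scaled down to one neuron.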
Even simple animal brains are capable of functions that are currently impossible for computers. Computers do rote things well, like keeping ledgers or performing complex math, but they have trouble recognizing even simple patterns, much less generalizing patterns of the past into actions of the future.

3.2. Artificial Network Operations
The other part of the "art" of using neural networks revolves around the myriad ways these individual neurons can be clustered together. This clustering occurs in the human mind in such a way that information can be processed in a dynamic, interactive and self-organizing way. Biologically, neural networks are constructed in a three-dimensional world from microscopic components. These neurons seem capable of nearly unrestricted interconnections. That is not true of any proposed, or existing, man-made network. Integrated circuits, using current technology, are two-dimensional devices with a limited number of layers for interconnection. This physical reality restrains the types, and scope, of artificial neural networks that can be implemented in silicon [3]. Currently, neural networks are simple clusterings of primitive artificial neurons. This clustering occurs by creating layers, which are then connected to one another. How these layers connect is the other part of the "art" of engineering networks to resolve real-world problems. Basically, all artificial neural networks have a similar structure or topology, as shown in Figure (1). In that structure some of the neurons interface with the real world to receive their inputs, while other neurons provide the real world with the network's outputs. This output might be the particular character that the network thinks it has scanned, or the particular image it thinks is being viewed. All the rest of the neurons are hidden from view. But a neural network is more than a bunch of neurons.
Some early researchers tried to simply connect neurons in a random manner, without much success. Now it is known that even the brains of snails are structured devices. One of the easiest ways to design a structure is to create layers of elements. It is the grouping of these neurons into layers, the connections between these layers, and the summation and transfer functions that comprise a functioning neural network. The general terms used to describe these characteristics are common to all networks. Although there are useful networks which contain only one layer, or even one element, most applications require networks that contain at least the three normal types of layers: input, hidden and output. The layer of input neurons receives the data either from input files or directly from electronic sensors in real-time applications. The output layer sends information directly to the outside world, to a secondary computer process, or to other devices such as a mechanical control system. Between these two layers there can be many hidden layers. These internal layers contain many of the neurons in various interconnected structures. The inputs and outputs of each of these hidden neurons simply go to other neurons. In most networks each neuron in a hidden layer receives the signals from all of the neurons in the layer above it, typically an input layer. After a neuron performs its function, it passes its output to all of the neurons in the layer below it, providing a feed-forward path to the output. These lines of communication from one neuron to another are important aspects of neural networks: they are the glue of the system, the connections which provide a variable strength to an input. There are two types of these connections: one causes the summing mechanism of the next neuron to add, while the other causes it to subtract.
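The layered, feed-forward arrangement just described can be sketched in a few lines. The code below is our own minimal illustration (the layer sizes and weight values are arbitrary, not from the paper); positive weights play the additive (excitatory) role and negative weights the subtractive (inhibitory) one.

```python
import math

def sigmoid(t):
    """A sigmoidal transfer function."""
    return 1.0 / (1.0 + math.exp(-t))

def layer(inputs, weights, biases):
    """Each neuron sums its weighted inputs (adding for positive weights,
    subtracting for negative ones) and applies the transfer function."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                # input layer (e.g. sensor values)
W_hidden = [[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]]
b_hidden = [0.0, 0.1, -0.1]
W_out = [[1.0, -1.0, 0.5]]
b_out = [0.0]

h = layer(x, W_hidden, b_hidden)               # hidden layer of three neurons
y = layer(h, W_out, b_out)                     # output layer of one neuron
```

Each hidden neuron receives every signal from the layer above it and feeds every neuron in the layer below it, exactly the feed-forward path described in the text.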
In more human terms, one excites while the other inhibits. Some networks want a neuron to inhibit the other neurons in the same layer. This is called lateral inhibition, and its most common use is in the output layer. For example, in text recognition, if the probability of a character being a "P" is 0.85 and the probability of the character being an "F" is 0.65, the network wants to choose the highest probability and inhibit all the others. It can do that with lateral inhibition. This concept is also called competition.

3.3. A Relation Between ANN's and N-topological Space
NN's have been developed as generalizations of mathematical models of human cognition or neural biology, and a network is characterized by:
1. the pattern of connections between the neurons and the layers (called the topology of the network);
2. the method of determining the weights on the connections (called its training, or learning, algorithm);
3. its activation function.
A neural network consists of a number of simple processing elements called neurons, and these neurons are arranged in layers. The number of neurons and layers in an NN differs from network to network, and this is called the topology of the network. The multilayer NN is not well understood [4]. Some authors [5] see little theoretical gain in using more than one hidden layer, since a single-hidden-layer model suffices for density. In this paper we introduce the definition and properties of the N-topological space, which can be applied to NN's with more than one hidden layer. One important advantage of the multiple-layer model (the N-topological space, see Figure (2)) has to do with the existence of locally supported functions in the two-hidden-layer model (4-topological space): for any activation function σ, a function g(x) of the single-hidden-layer form defined below has no compact support (see [6], [7] for a more detailed discussion).
Another advantage of the multilayer model (N-topological space): there is a lower bound on the degree to which the single-hidden-layer model (3-topological space) with r neuron units in the hidden layer can approximate any function. It is given by the extent to which a linear combination of r activation functions can approximate this same function and, more importantly, this approximation is itself bounded below (away from zero) with some non-trifling dependence on r and on the set to be approximated. In the single-hidden-layer model (3-topological space) there is thus an intrinsic lower bound on the degree of approximation depending on the number of neuron units used. This is not the case in the two-hidden-layer model (4-topological space). Finally, we can show, using the Kolmogorov superposition theorem [7], that a finite number of units in both hidden layers (4-topological space) is sufficient to approximate arbitrarily well any continuous function.

Theorem 3.4
There exists an activation function σ which is C∞, strictly increasing and sigmoidal, and which has the following property: for any f ∈ C[0,1]ⁿ and ε > 0, there exist constants dᵢ, cᵢⱼ, θᵢⱼ and vectors Wᵢⱼ ∈ ℝⁿ for which

| f(x) − Σ_{i=1}^{2n+1} Σ_{j=1}^{2n+1} dᵢ cᵢⱼ σ(Wᵢⱼᵀx + θᵢⱼ) | < ε,  for all x ∈ [0,1]ⁿ.
Proof
Let f be any continuous function on [0,1]ⁿ and let ε > 0. Then, by the Kolmogorov superposition theorem, there exist constants dᵢ, sᵢ, vectors vᵢ ∈ ℝⁿ and a continuous function Φ such that

| f(x) − Σ_{i=1}^{2n+1} dᵢ Φ(vᵢᵀx + sᵢ) | < ε/(2n+1). ……(1)

Since Φ is a continuous function, by restricting Φ to a suitable interval we can represent Φ in the form

Φ(t) = Σ_{j=1}^{2n+1} cᵢⱼ σ(rᵢⱼ t + Lᵢⱼ). ……(2)

By substituting (2) in (1), we obtain

| f(x) − Σ_{i=1}^{2n+1} Σ_{j=1}^{2n+1} dᵢ cᵢⱼ σ(rᵢⱼ(vᵢᵀx + sᵢ) + Lᵢⱼ) | < ε,

and hence, writing Wᵢⱼ = rᵢⱼ vᵢ and θᵢⱼ = rᵢⱼ sᵢ + Lᵢⱼ,

| f(x) − Σ_{i=1}^{2n+1} Σ_{j=1}^{2n+1} dᵢ cᵢⱼ σ(Wᵢⱼᵀx + θᵢⱼ) | < ε. □

Now, we introduce the following definition:

Definition 3.5
A set of functions is said to be fundamental in a given space if the linear combinations of them are dense in that space.

Theorem 3.6
Let K be a compact set in ℝⁿ. Then the set E of functions of the form φ(x) = exp(aᵀx), where a ∈ ℝⁿ, is fundamental in C(K).

Proof
By the Stone–Weierstrass theorem we need only show that the set forms an algebra and separates points. Suppose x ∈ K. First, we have exp(aᵀx)·exp(bᵀx) = exp(aᵀx + bᵀx) = exp((aᵀ + bᵀ)x), so E is closed under products. The set also contains the function "1": simply choose a = 0. This establishes that E is an algebra. It remains to show that E separates the points of K. So let x, y ∈ K with x ≠ y. Set a = x − y. Then aᵀ(x − y) ≠ 0, so aᵀx ≠ aᵀy. Thus exp(aᵀx) ≠ exp(aᵀy). The proof is complete. ■

Before considering more constructive versions of this result, we complete the density proof.

Theorem 3.7
Let K be a compact set in ℝⁿ. Then the set F of functions of the form

g(x) = Σ_{j=1}^{k} vⱼ σ(Wⱼᵀx + cⱼ),

with σ a continuous sigmoidal function, is dense in C(K).

Proof
Let f ∈ C(K).
For any ε > 0, there exist (by Theorem 3.6) a finite number m of vectors aᵢ such that

| f − Σ_{i=1}^{m} exp(aᵢᵀx) | < ε/2.

Since there are only m scalars aᵢᵀx, we may find a finite interval including all of them. Thus there exists a number λ such that exp(aᵢᵀx) = exp(λy), where y = aᵢᵀx/λ ∈ [0,1]. Then Theorem 3.6 tells us that the function exp(λy) can be approximated by linear combinations of functions of the form σ(Wⱼᵀx + cⱼ) with a uniform error less than ε/2m, from which the desired result easily follows. ■

Remarks
1. Theorem 3.7 tells us that one hidden layer is sufficient to approximate any continuous function to any required accuracy.
2. λ in the proof of Theorem 3.7 can be chosen to be an integer.
3. The question of the rate of convergence of the approximations is obviously of considerable importance. If f is smooth and we use smooth approximating functions, we might hope to get better convergence than the simple O(1/n).

4. Conclusions
1. We define the N-topological space and give some examples and properties of this notion.
2. We give an application of the N-topological space in NN's, and we obtain:
(i) increasing the number of hidden units leads to a decreasing number of training epochs;
(ii) a large number of hidden units leads to a small error on the training set, but not necessarily to a small error on the test set;
(iii) if we fix the number of basis functions and increase the number of layers of the NN (that is, increase N in the N-topological space), then we get an accurate numerical solution.

References
1. El-Atik, A.E.F.; Abd El-Monsef, M.E. and Lashin, E.I. (2002) "On finite T₀ topological spaces", Ninth Prague Topological Symposium, Topology Atlas, Toronto, pp. 75–90.
2. Wilkins, D.R. (1998–1999) "Topological Spaces", Academic Year 1998–1999.
3. Stergiou, C. and Siganos, D. (2008) "Neural Networks", IEEE Neural Networks Council.
4.
Tawfiq, L.N.M. (2007) "Density and Approximation by using Feed Forward Artificial Neural Networks", Ibn Al-Haitham Journal for Pure and Applied Science, 20 (1).
5. Dijkstra, E.W. (2001) "Approximation with Artificial Neural Networks", MSc thesis, Faculty of Mathematics and Computing Science, Eindhoven University of Technology, The Netherlands.
6. Mason, J.C. and Parks, P.C. (2004) "Selection of neural network structures: some approximation theory guidelines", IEE Control Engineering Series, 46: 151–180.
7. Ellacott, S.W. (1994) "Aspects of the numerical analysis of neural networks", Acta Numerica, 145–202.
8. Kurkova, V. (1991) "Kolmogorov's theorem is relevant", Neural Computation, 3: 617–622.

Fig. (1) A Simple Neural Network Diagram.
Fig. (2) Graph of a multilayer (N-topological space) neural network with sequential connections.