CUBO A Mathematical Journal
Vol. 14, No. 03, (71–83). October 2012

Fractional Voronovskaya type asymptotic expansions for quasi-interpolation neural network operators

George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, U.S.A.
email: ganastss@memphis.edu

ABSTRACT

Here we study further the quasi-interpolation sigmoidal and hyperbolic tangent type neural network operators of one hidden layer. Based on fractional calculus theory we derive fractional Voronovskaya type asymptotic expansions for the error of approximation of these operators to the unit operator.

RESUMEN

We study the quasi-interpolation of sigmoidal and hyperbolic tangent type neural network operators of one hidden layer. Based on fractional calculus theory, we obtain Voronovskaya type asymptotic expansions for the error of approximation of these operators to the unit operator.

Keywords and Phrases: Neural network fractional approximation, Voronovskaya asymptotic expansion, fractional derivative.

2010 AMS Mathematics Subject Classification: 26A33, 41A25, 41A36, 41A60.

1 Background

We need

Definition 1. Let $\nu > 0$, $n = \lceil \nu \rceil$ ($\lceil \cdot \rceil$ is the ceiling of the number), and $f \in AC^{n}([a,b])$ (the space of functions $f$ with $f^{(n-1)} \in AC([a,b])$, the absolutely continuous functions). We call the left Caputo fractional derivative (see [13], pp. 49–52) the function

$$D_{*a}^{\nu} f(x) = \frac{1}{\Gamma(n-\nu)} \int_{a}^{x} (x-t)^{n-\nu-1} f^{(n)}(t)\,dt, \qquad (1)$$

$\forall\, x \in [a,b]$, where $\Gamma$ is the gamma function, $\Gamma(\nu) = \int_{0}^{\infty} e^{-t} t^{\nu-1}\,dt$, $\nu > 0$. Notice that $D_{*a}^{\nu} f \in L_{1}([a,b])$ and $D_{*a}^{\nu} f$ exists a.e. on $[a,b]$. We set $D_{*a}^{0} f(x) = f(x)$, $\forall\, x \in [a,b]$.

Definition 2 (see also [3], [14], [15]). Let $f \in AC^{m}([a,b])$, $m = \lceil \alpha \rceil$, $\alpha > 0$. The right Caputo fractional derivative of order $\alpha > 0$ is given by

$$D_{b-}^{\alpha} f(x) = \frac{(-1)^{m}}{\Gamma(m-\alpha)} \int_{x}^{b} (\zeta - x)^{m-\alpha-1} f^{(m)}(\zeta)\,d\zeta, \qquad (2)$$

$\forall\, x \in [a,b]$. We set $D_{b-}^{0} f(x) = f(x)$. Notice that $D_{b-}^{\alpha} f \in L_{1}([a,b])$ and $D_{b-}^{\alpha} f$ exists a.e. on $[a,b]$.

Convention 3. We assume that

$$D_{*x_{0}}^{\alpha} f(x) = 0 \quad \text{for } x < x_{0}, \qquad (3)$$

and

$$D_{x_{0}-}^{\alpha} f(x) = 0 \quad \text{for } x > x_{0}, \qquad (4)$$

for all $x, x_{0} \in [a,b]$.

We mention

Proposition 4 (by [5]). Let $f \in C^{n}([a,b])$, $n = \lceil \nu \rceil$, $\nu > 0$. Then $D_{*a}^{\nu} f(x)$ is continuous in $x \in [a,b]$.

Also we have

Proposition 5 (by [5]). Let $f \in C^{m}([a,b])$, $m = \lceil \alpha \rceil$, $\alpha > 0$. Then $D_{b-}^{\alpha} f(x)$ is continuous in $x \in [a,b]$.

Theorem 6 ([5]). Let $f \in C^{m}([a,b])$, $m = \lceil \alpha \rceil$, $\alpha > 0$, $x, x_{0} \in [a,b]$. Then $D_{*x_{0}}^{\alpha} f(x)$ and $D_{x_{0}-}^{\alpha} f(x)$ are jointly continuous functions in $(x, x_{0})$ from $[a,b]^{2}$ into $\mathbb{R}$.

We mention the left Caputo fractional Taylor formula with integral remainder.

Theorem 7 ([13], p. 54). Let $f \in AC^{m}([a,b])$, $[a,b] \subset \mathbb{R}$, $m = \lceil \alpha \rceil$, $\alpha > 0$. Then

$$f(x) = \sum_{k=0}^{m-1} \frac{f^{(k)}(x_{0})}{k!} (x - x_{0})^{k} + \frac{1}{\Gamma(\alpha)} \int_{x_{0}}^{x} (x - J)^{\alpha-1} D_{*x_{0}}^{\alpha} f(J)\,dJ, \qquad (5)$$

$\forall\, x \ge x_{0}$; $x, x_{0} \in [a,b]$.

Also we mention the right Caputo fractional Taylor formula.

Theorem 8 ([3]). Let $f \in AC^{m}([a,b])$, $[a,b] \subset \mathbb{R}$, $m = \lceil \alpha \rceil$, $\alpha > 0$. Then

$$f(x) = \sum_{k=0}^{m-1} \frac{f^{(k)}(x_{0})}{k!} (x - x_{0})^{k} + \frac{1}{\Gamma(\alpha)} \int_{x}^{x_{0}} (J - x)^{\alpha-1} D_{x_{0}-}^{\alpha} f(J)\,dJ, \qquad (6)$$

$\forall\, x \le x_{0}$; $x, x_{0} \in [a,b]$.

For more on fractional calculus related to this work see [2], [4] and [7].
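To make Definition 1 concrete, the following minimal numerical sketch, assuming SciPy, evaluates the left Caputo derivative for $0 < \nu < 1$ (so $n = 1$) by quadrature. The test function $f(t) = t^{2}$, the order $\nu = 1/2$, and the point $x$ are illustrative choices, checked against the known closed form $D_{*0}^{\nu}\, t^{2} = 2 t^{2-\nu}/\Gamma(3-\nu)$.

```python
# A minimal sketch of Definition 1 for 0 < nu < 1 (n = ceil(nu) = 1),
# assuming SciPy; the QAWS rule (weight='alg') absorbs the integrable
# endpoint singularity (x - t)^(-nu).  All concrete choices below are
# illustrative, not taken from the paper.
from scipy.integrate import quad
from scipy.special import gamma

def left_caputo(df, a, x, nu):
    # D^nu_{*a} f(x) = (1/Gamma(1 - nu)) * int_a^x (x - t)^(-nu) f'(t) dt
    # weight='alg', wvar=(p, q) weights the integrand by (t-a)^p (x-t)^q
    val, _ = quad(df, a, x, weight='alg', wvar=(0.0, -nu))
    return val / gamma(1.0 - nu)

nu, x = 0.5, 1.3                                    # illustrative choices
num = left_caputo(lambda t: 2.0 * t, 0.0, x, nu)    # f(t) = t^2, f'(t) = 2t
ref = 2.0 * x ** (2.0 - nu) / gamma(3.0 - nu)       # known closed form
print(num, ref)                                     # the two should agree
```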
We consider here the sigmoidal function of logarithmic type

$$s(x) = \frac{1}{1 + e^{-x}}, \quad x \in \mathbb{R}.$$

It has the properties $\lim_{x \to +\infty} s(x) = 1$ and $\lim_{x \to -\infty} s(x) = 0$. This function plays the role of an activation function in the hidden layer of neural networks. As in [12], we consider

$$\Phi(x) := \frac{1}{2}\left(s(x+1) - s(x-1)\right), \quad x \in \mathbb{R}. \qquad (7)$$

We notice the following properties:

i) $\Phi(x) > 0$, $\forall\, x \in \mathbb{R}$,
ii) $\sum_{k=-\infty}^{\infty} \Phi(x - k) = 1$, $\forall\, x \in \mathbb{R}$,
iii) $\sum_{k=-\infty}^{\infty} \Phi(nx - k) = 1$, $\forall\, x \in \mathbb{R}$, $n \in \mathbb{N}$,
iv) $\int_{-\infty}^{\infty} \Phi(x)\,dx = 1$,
v) $\Phi$ is a density function,
vi) $\Phi$ is even: $\Phi(-x) = \Phi(x)$, $x \ge 0$.

We see that ([12])

$$\Phi(x) = \left(\frac{e^{2}-1}{2e}\right) \frac{e^{-x}}{\left(1 + e^{-x-1}\right)\left(1 + e^{-x+1}\right)} = \left(\frac{e^{2}-1}{2e^{2}}\right) \frac{1}{\left(1 + e^{x-1}\right)\left(1 + e^{-x-1}\right)}. \qquad (8)$$

vii) By [12], $\Phi$ is decreasing on $\mathbb{R}_{+}$ and increasing on $\mathbb{R}_{-}$.

viii) By [11], for $n \in \mathbb{N}$ and $0 < \beta < 1$, we get

$$\sum_{\substack{k = -\infty \\ |nx-k| > n^{1-\beta}}}^{\infty} \Phi(nx - k) < \left(\frac{e^{2}-1}{2}\right) e^{-n^{(1-\beta)}} = 3.1992\, e^{-n^{(1-\beta)}}. \qquad (9)$$

Denote by $\lfloor \cdot \rfloor$ the integral part of a number. Consider $x \in [a,b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ such that $\lceil na \rceil \le \lfloor nb \rfloor$.

ix) By [11] it holds

$$\frac{1}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k)} < \frac{1}{\Phi(1)} = 5.250312578, \quad \forall\, x \in [a,b]. \qquad (10)$$

x) By [11] it holds $\lim_{n \to \infty} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k) \ne 1$ for at least some $x \in [a,b]$.

Let $f \in C([a,b])$ and $n \in \mathbb{N}$ such that $\lceil na \rceil \le \lfloor nb \rfloor$. We study further (see also [11]) the quasi-interpolation positive linear neural network operator

$$G_{n}(f,x) := \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \Phi(nx - k)}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k)}, \quad x \in [a,b]. \qquad (11)$$

For large enough $n$ we always obtain $\lceil na \rceil \le \lfloor nb \rfloor$. Also $a \le \frac{k}{n} \le b$ iff $\lceil na \rceil \le k \le \lfloor nb \rfloor$.

We also consider here the hyperbolic tangent function $\tanh x$, $x \in \mathbb{R}$:

$$\tanh x := \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} = \frac{e^{2x} - 1}{e^{2x} + 1}.$$

It has the properties $\tanh 0 = 0$, $-1 < \tanh x < 1$, $\forall\, x \in \mathbb{R}$, and $\tanh(-x) = -\tanh x$. Furthermore $\tanh x \to 1$ as $x \to \infty$, $\tanh x \to -1$ as $x \to -\infty$, and it is strictly increasing on $\mathbb{R}$. It also holds $\frac{d}{dx} \tanh x = \frac{1}{\cosh^{2} x} > 0$. This function too plays the role of an activation function in the hidden layer of neural networks.

We further consider

$$\Psi(x) := \frac{1}{4}\left(\tanh(x+1) - \tanh(x-1)\right) > 0, \quad \forall\, x \in \mathbb{R}. \qquad (12)$$

We easily see that $\Psi(-x) = \Psi(x)$, that is, $\Psi$ is even on $\mathbb{R}$. Obviously $\Psi$ is differentiable, hence continuous. Here we follow [8].

Proposition 9. $\Psi(x)$ is strictly decreasing for $x \ge 0$.

Obviously $\Psi(x)$ is strictly increasing for $x \le 0$. Also it holds $\lim_{x \to -\infty} \Psi(x) = 0 = \lim_{x \to \infty} \Psi(x)$. In fact $\Psi$ has the bell shape with horizontal asymptote the $x$-axis. So the maximum of $\Psi$ is at zero, $\Psi(0) = 0.3809297$.

Theorem 10. We have that $\sum_{i=-\infty}^{\infty} \Psi(x - i) = 1$, $\forall\, x \in \mathbb{R}$. Thus

$$\sum_{i=-\infty}^{\infty} \Psi(nx - i) = 1, \quad \forall\, n \in \mathbb{N},\ \forall\, x \in \mathbb{R}.$$

Furthermore, since $\Psi$ is even, it holds $\sum_{i=-\infty}^{\infty} \Psi(i - x) = 1$, $\forall\, x \in \mathbb{R}$; hence $\sum_{i=-\infty}^{\infty} \Psi(i + x) = 1$ and $\sum_{i=-\infty}^{\infty} \Psi(x + i) = 1$, $\forall\, x \in \mathbb{R}$.

Theorem 11. It holds $\int_{-\infty}^{\infty} \Psi(x)\,dx = 1$. So $\Psi(x)$ is a density function on $\mathbb{R}$.

Theorem 12. Let $0 < \beta < 1$ and $n \in \mathbb{N}$. It holds

$$\sum_{\substack{k = -\infty \\ |nx-k| \ge n^{1-\beta}}}^{\infty} \Psi(nx - k) \le e^{4} \cdot e^{-2 n^{(1-\beta)}}. \qquad (13)$$

Theorem 13. Let $x \in [a,b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \le \lfloor nb \rfloor$. It holds

$$\frac{1}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Psi(nx - k)} < \frac{1}{\Psi(1)} = 4.1488766. \qquad (14)$$

Also by [8] we obtain

$$\lim_{n \to \infty} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Psi(nx - k) \ne 1, \qquad (15)$$

for at least some $x \in [a,b]$.

Definition 14. Let $f \in C([a,b])$ and $n \in \mathbb{N}$ such that $\lceil na \rceil \le \lfloor nb \rfloor$. We further study, as in [8], the quasi-interpolation positive linear neural network operator

$$F_{n}(f,x) := \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \Psi(nx - k)}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Psi(nx - k)}, \quad x \in [a,b]. \qquad (16)$$

We find here fractional Voronovskaya type asymptotic expansions for $G_{n}(f,x)$ and $F_{n}(f,x)$, $x \in [a,b]$. For related work on neural networks also see [1], [6], [9] and [10]. For neural networks in general see [16], [17] and [18].
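Both operators are straightforward to evaluate numerically. The following minimal sketch, assuming NumPy, implements $G_{n}$ (11) and $F_{n}$ (16) directly from their definitions; the function $f = \sin$, the interval $[0,3]$, and the point $x = 1.2$ are illustrative choices only.

```python
# A minimal sketch, assuming NumPy, of the quasi-interpolation
# operators G_n (11) and F_n (16); f, [a, b] and x are illustrative.
import numpy as np

def Phi(t):                    # (7): scaled difference of sigmoids
    s = lambda u: 1.0 / (1.0 + np.exp(-u))
    return 0.5 * (s(t + 1) - s(t - 1))

def Psi(t):                    # (12): scaled difference of tanh's
    return 0.25 * (np.tanh(t + 1) - np.tanh(t - 1))

def neural_op(f, x, n, a, b, kernel):
    # sum over ceil(na) <= k <= floor(nb), normalized by V(x)
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = kernel(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

f, a, b, x = np.sin, 0.0, 3.0, 1.2
for n in (10, 100, 1000):
    print(n, neural_op(f, x, n, a, b, Phi) - f(x),
             neural_op(f, x, n, a, b, Psi) - f(x))
# both errors should shrink as n grows
```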
2 Main Results

We present our first main result.

Theorem 15. Let $\alpha > 0$, $N \in \mathbb{N}$, $N = \lceil \alpha \rceil$, $f \in AC^{N}([a,b])$, $0 < \beta < 1$, $x \in [a,b]$, and $n \in \mathbb{N}$ large enough. Assume that $\left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}, \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \le M$, $M > 0$. Then

$$G_{n}(f,x) - f(x) = \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!}\, G_{n}\!\left((\cdot - x)^{j}\right)(x) + o\!\left(\frac{1}{n^{\beta(\alpha-\varepsilon)}}\right), \qquad (17)$$

where $0 < \varepsilon \le \alpha$. If $N = 1$, the sum in (17) collapses. The last (17) implies that

$$n^{\beta(\alpha-\varepsilon)} \left[\, G_{n}(f,x) - f(x) - \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!}\, G_{n}\!\left((\cdot - x)^{j}\right)(x) \right] \to 0, \qquad (18)$$

as $n \to \infty$, $0 < \varepsilon \le \alpha$. When $N = 1$, or $f^{(j)}(x) = 0$ for $j = 1, \ldots, N-1$, we derive that $n^{\beta(\alpha-\varepsilon)} \left[G_{n}(f,x) - f(x)\right] \to 0$ as $n \to \infty$, $0 < \varepsilon \le \alpha$. Of great interest is the case $\alpha = \frac{1}{2}$.
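Before the proof, (18) can be probed numerically. The sketch below, assuming NumPy, uses the illustrative choices $f = \sin$, $\alpha = \beta = 1/2$ (so $N = 1$ and the correction sum in (17) is empty), and $\varepsilon = 1/4$; none of these values come from the theorem itself.

```python
# A sanity-check sketch of (18) for G_n, assuming NumPy; alpha = 0.5
# gives N = 1, so T(x) reduces to G_n(f, x) - f(x).  All concrete
# choices are illustrative.
import numpy as np

def Phi(t):
    s = lambda u: 1.0 / (1.0 + np.exp(-u))
    return 0.5 * (s(t + 1) - s(t - 1))

def G(f, x, n, a, b):
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = Phi(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

alpha, beta, eps = 0.5, 0.5, 0.25      # 0 < eps <= alpha
f, a, b, x = np.sin, 0.0, 3.0, 1.2
for n in (50, 200, 800, 3200):
    T = G(f, x, n, a, b) - f(x)
    print(n, n ** (beta * (alpha - eps)) * abs(T))   # should tend to 0
```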
Proof. From [13], p. 54, i.e. (5), we get by the left Caputo fractional Taylor formula that

$$f\!\left(\frac{k}{n}\right) = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \left(\frac{k}{n} - x\right)^{j} + \frac{1}{\Gamma(\alpha)} \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ, \qquad (19)$$

for all $x \le \frac{k}{n} \le b$. Also from [3], i.e. (6), using the right Caputo fractional Taylor formula we get

$$f\!\left(\frac{k}{n}\right) = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \left(\frac{k}{n} - x\right)^{j} + \frac{1}{\Gamma(\alpha)} \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ, \qquad (20)$$

for all $a \le \frac{k}{n} \le x$.

We call

$$V(x) := \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k). \qquad (21)$$

Hence we have

$$\frac{f\!\left(\frac{k}{n}\right) \Phi(nx-k)}{V(x)} = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \frac{\Phi(nx-k)}{V(x)} \left(\frac{k}{n} - x\right)^{j} + \frac{\Phi(nx-k)}{V(x)\,\Gamma(\alpha)} \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ, \qquad (22)$$

for all $x \le \frac{k}{n} \le b$, iff $\lceil nx \rceil \le k \le \lfloor nb \rfloor$, and

$$\frac{f\!\left(\frac{k}{n}\right) \Phi(nx-k)}{V(x)} = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \frac{\Phi(nx-k)}{V(x)} \left(\frac{k}{n} - x\right)^{j} + \frac{\Phi(nx-k)}{V(x)\,\Gamma(\alpha)} \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ, \qquad (23)$$

for all $a \le \frac{k}{n} \le x$, iff $\lceil na \rceil \le k \le \lfloor nx \rfloor$.

We have that $\lceil nx \rceil \le \lfloor nx \rfloor + 1$. Therefore it holds

$$\sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \frac{f\!\left(\frac{k}{n}\right)\Phi(nx-k)}{V(x)} = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)\left(\frac{k}{n} - x\right)^{j}}{V(x)} + \frac{1}{\Gamma(\alpha)} \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)}{V(x)} \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ, \qquad (24)$$

and

$$\sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \frac{f\!\left(\frac{k}{n}\right)\Phi(nx-k)}{V(x)} = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)} \left(\frac{k}{n} - x\right)^{j} + \frac{1}{\Gamma(\alpha)} \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)} \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ. \qquad (25)$$

Adding the last two equalities (24) and (25) we obtain

$$G_{n}(f,x) = \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \frac{f\!\left(\frac{k}{n}\right)\Phi(nx-k)}{V(x)} = \sum_{j=0}^{N-1} \frac{f^{(j)}(x)}{j!} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)}{V(x)} \left(\frac{k}{n} - x\right)^{j}$$
$$+ \frac{1}{\Gamma(\alpha)\,V(x)} \left\{ \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \Phi(nx-k) \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ + \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \Phi(nx-k) \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ \right\}. \qquad (26)$$

So we have derived

$$T(x) := G_{n}(f,x) - f(x) - \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!}\, G_{n}\!\left((\cdot - x)^{j}\right)(x) = \theta_{n}^{*}(x), \qquad (27)$$

where

$$\theta_{n}^{*}(x) := \frac{1}{\Gamma(\alpha)\,V(x)} \left\{ \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \Phi(nx-k) \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ + \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \Phi(nx-k) \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ \right\}. \qquad (28)$$

We set

$$\theta_{1n}^{*}(x) := \frac{1}{\Gamma(\alpha)} \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)} \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ, \qquad (29)$$

and

$$\theta_{2n}^{*}(x) := \frac{1}{\Gamma(\alpha)} \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)}{V(x)} \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ, \qquad (30)$$

i.e.

$$\theta_{n}^{*}(x) = \theta_{1n}^{*}(x) + \theta_{2n}^{*}(x). \qquad (31)$$

We assume $b - a > \frac{1}{n^{\beta}}$, $0 < \beta < 1$, which is always the case for large enough $n \in \mathbb{N}$, that is, when $n > \left\lceil (b-a)^{-1/\beta} \right\rceil$. It is always true that either $\left|\frac{k}{n} - x\right| \le \frac{1}{n^{\beta}}$ or $\left|\frac{k}{n} - x\right| > \frac{1}{n^{\beta}}$.

For $k = \lceil na \rceil, \ldots, \lfloor nx \rfloor$, we consider

$$\gamma_{1k} := \left| \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} D_{x-}^{\alpha} f(J)\,dJ \right| \le \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} \left|D_{x-}^{\alpha} f(J)\right| dJ \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{\left(x - \frac{k}{n}\right)^{\alpha}}{\alpha} \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{(x-a)^{\alpha}}{\alpha}. \qquad (32)\text{--}(33)$$

That is

$$\gamma_{1k} \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{(x-a)^{\alpha}}{\alpha}, \qquad (34)$$

for $k = \lceil na \rceil, \ldots, \lfloor nx \rfloor$. Also, in case of $\left|\frac{k}{n} - x\right| \le \frac{1}{n^{\beta}}$, we have

$$\gamma_{1k} \le \int_{k/n}^{x} \left(J - \frac{k}{n}\right)^{\alpha-1} \left|D_{x-}^{\alpha} f(J)\right| dJ \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{\left(x - \frac{k}{n}\right)^{\alpha}}{\alpha} \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{1}{\alpha n^{\alpha\beta}}. \qquad (35)$$

So, when $\left(x - \frac{k}{n}\right) \le \frac{1}{n^{\beta}}$, we get

$$\gamma_{1k} \le \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{1}{\alpha n^{\alpha\beta}}. \qquad (36)$$

Therefore

$$\left|\theta_{1n}^{*}(x)\right| \le \frac{1}{\Gamma(\alpha)} \sum_{k=\lceil na \rceil}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)}\, \gamma_{1k} = \frac{1}{\Gamma(\alpha)} \left\{ \sum_{\substack{k = \lceil na \rceil \\ \left|\frac{k}{n}-x\right| \le \frac{1}{n^{\beta}}}}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)}\, \gamma_{1k} + \sum_{\substack{k = \lceil na \rceil \\ \left|\frac{k}{n}-x\right| > \frac{1}{n^{\beta}}}}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)}\, \gamma_{1k} \right\}$$
$$\le \frac{1}{\Gamma(\alpha)} \left\{ \left( \sum_{\substack{k = \lceil na \rceil \\ \left|\frac{k}{n}-x\right| \le \frac{1}{n^{\beta}}}}^{\lfloor nx \rfloor} \frac{\Phi(nx-k)}{V(x)} \right) \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{1}{\alpha n^{\alpha\beta}} + \frac{1}{V(x)} \left( \sum_{\substack{k = \lceil na \rceil \\ \left|\frac{k}{n}-x\right| > \frac{1}{n^{\beta}}}}^{\lfloor nx \rfloor} \Phi(nx-k) \right) \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]} \frac{(x-a)^{\alpha}}{\alpha} \right\}$$

(by (9), (10))

$$\le \frac{\left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}}{\Gamma(\alpha+1)} \left\{ \frac{1}{n^{\alpha\beta}} + (5.250312578)(3.1992)\, e^{-n^{(1-\beta)}} (x-a)^{\alpha} \right\}. \qquad (37)$$

Therefore we proved

$$\left|\theta_{1n}^{*}(x)\right| \le \frac{\left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}}{\Gamma(\alpha+1)} \left\{ \frac{1}{n^{\alpha\beta}} + (16.7968)\, e^{-n^{(1-\beta)}} (x-a)^{\alpha} \right\}. \qquad (38)$$

But for large enough $n \in \mathbb{N}$ we get

$$\left|\theta_{1n}^{*}(x)\right| \le \frac{2 \left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}}{\Gamma(\alpha+1)\, n^{\alpha\beta}}. \qquad (39)$$

Similarly we have

$$\gamma_{2k} := \left| \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} D_{*x}^{\alpha} f(J)\,dJ \right| \le \int_{x}^{k/n} \left(\frac{k}{n} - J\right)^{\alpha-1} \left|D_{*x}^{\alpha} f(J)\right| dJ \le \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \frac{\left(\frac{k}{n} - x\right)^{\alpha}}{\alpha} \le \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \frac{(b-x)^{\alpha}}{\alpha}. \qquad (40)$$

That is

$$\gamma_{2k} \le \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \frac{(b-x)^{\alpha}}{\alpha}, \qquad (41)$$

for $k = \lfloor nx \rfloor + 1, \ldots, \lfloor nb \rfloor$. Also, in case of $\left|\frac{k}{n} - x\right| \le \frac{1}{n^{\beta}}$, we have

$$\gamma_{2k} \le \frac{\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]}}{\alpha n^{\alpha\beta}}. \qquad (42)$$

Consequently it holds

$$\left|\theta_{2n}^{*}(x)\right| \le \frac{1}{\Gamma(\alpha)} \sum_{k=\lfloor nx \rfloor + 1}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)}{V(x)}\, \gamma_{2k} \le \frac{1}{\Gamma(\alpha)} \left\{ \left( \sum_{\substack{k = \lfloor nx \rfloor + 1 \\ \left|\frac{k}{n}-x\right| \le \frac{1}{n^{\beta}}}}^{\lfloor nb \rfloor} \frac{\Phi(nx-k)}{V(x)} \right) \frac{\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]}}{\alpha n^{\alpha\beta}} + \frac{1}{V(x)} \left( \sum_{\substack{k = \lfloor nx \rfloor + 1 \\ \left|\frac{k}{n}-x\right| > \frac{1}{n^{\beta}}}}^{\lfloor nb \rfloor} \Phi(nx-k) \right) \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \frac{(b-x)^{\alpha}}{\alpha} \right\}$$
$$\le \frac{\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]}}{\Gamma(\alpha+1)} \left\{ \frac{1}{n^{\alpha\beta}} + (16.7968)\, e^{-n^{(1-\beta)}} (b-x)^{\alpha} \right\}. \qquad (43)$$

That is

$$\left|\theta_{2n}^{*}(x)\right| \le \frac{\left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]}}{\Gamma(\alpha+1)} \left\{ \frac{1}{n^{\alpha\beta}} + (16.7968)\, e^{-n^{(1-\beta)}} (b-x)^{\alpha} \right\}. \qquad (44)$$

But for large enough $n \in \mathbb{N}$ we get

$$\left|\theta_{2n}^{*}(x)\right| \le \frac{2 \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]}}{\Gamma(\alpha+1)\, n^{\alpha\beta}}. \qquad (45)$$

Since $\left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}, \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \le M$, $M > 0$, we derive, by (39) and (45),

$$\left|\theta_{n}^{*}(x)\right| \le \left|\theta_{1n}^{*}(x)\right| + \left|\theta_{2n}^{*}(x)\right| \le \frac{4M}{\Gamma(\alpha+1)\, n^{\alpha\beta}}. \qquad (46)$$

That is, for large enough $n \in \mathbb{N}$, we get

$$|T(x)| = \left|\theta_{n}^{*}(x)\right| \le \left(\frac{4M}{\Gamma(\alpha+1)}\right) \left(\frac{1}{n^{\alpha\beta}}\right), \qquad (47)$$

resulting in

$$|T(x)| = O\!\left(\frac{1}{n^{\alpha\beta}}\right), \qquad (48)$$

and

$$|T(x)| = o(1). \qquad (49)$$

And, letting $0 < \varepsilon \le \alpha$, we derive

$$\frac{|T(x)|}{\left(\frac{1}{n^{\beta(\alpha-\varepsilon)}}\right)} \le \left(\frac{4M}{\Gamma(\alpha+1)}\right) \left(\frac{1}{n^{\beta\varepsilon}}\right) \to 0, \qquad (50)$$

as $n \to \infty$. I.e.

$$|T(x)| = o\!\left(\frac{1}{n^{\beta(\alpha-\varepsilon)}}\right), \qquad (51)$$

proving the claim.

We present our second main result.

Theorem 16. Let $\alpha > 0$, $N \in \mathbb{N}$, $N = \lceil \alpha \rceil$, $f \in AC^{N}([a,b])$, $0 < \beta < 1$, $x \in [a,b]$, and $n \in \mathbb{N}$ large enough. Assume that $\left\|D_{x-}^{\alpha} f\right\|_{\infty,[a,x]}, \left\|D_{*x}^{\alpha} f\right\|_{\infty,[x,b]} \le M$, $M > 0$. Then

$$F_{n}(f,x) - f(x) = \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!}\, F_{n}\!\left((\cdot - x)^{j}\right)(x) + o\!\left(\frac{1}{n^{\beta(\alpha-\varepsilon)}}\right), \qquad (52)$$

where $0 < \varepsilon \le \alpha$. If $N = 1$, the sum in (52) collapses. The last (52) implies that

$$n^{\beta(\alpha-\varepsilon)} \left[\, F_{n}(f,x) - f(x) - \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!}\, F_{n}\!\left((\cdot - x)^{j}\right)(x) \right] \to 0, \qquad (53)$$

as $n \to \infty$, $0 < \varepsilon \le \alpha$. When $N = 1$, or $f^{(j)}(x) = 0$ for $j = 1, \ldots, N-1$, we derive that $n^{\beta(\alpha-\varepsilon)} \left[F_{n}(f,x) - f(x)\right] \to 0$ as $n \to \infty$, $0 < \varepsilon \le \alpha$. Of great interest is the case $\alpha = \frac{1}{2}$.

Proof. Similar to Theorem 15, using (13) and (14).
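As with Theorem 15, the conclusion of Theorem 16 can be probed empirically. The sketch below, assuming NumPy, fits the observed decay exponent of $|F_{n}(f,x) - f(x)|$ on a log-log scale; a fitted exponent at or below $-\beta(\alpha - \varepsilon)$ is consistent with (52). All concrete choices ($f = \sin$, the interval, the point $x$, and the parameters) are illustrative.

```python
# An empirical look at Theorem 16, assuming NumPy: fit the decay
# exponent of |F_n(f, x) - f(x)| versus n on a log-log scale.
import numpy as np

def Psi(t):                          # (12)
    return 0.25 * (np.tanh(t + 1) - np.tanh(t - 1))

def F(f, x, n, a, b):                # (16)
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = Psi(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

alpha, beta, eps = 0.5, 0.5, 0.25    # illustrative; N = ceil(alpha) = 1
f, a, b, x = np.sin, 0.0, 3.0, 1.2   # illustrative choices
ns = np.array([100, 200, 400, 800, 1600, 3200])
errs = np.array([abs(F(f, x, n, a, b) - f(x)) for n in ns])
slope = np.polyfit(np.log(ns), np.log(errs), 1)[0]
print(slope, -beta * (alpha - eps))  # fitted exponent vs. -beta*(alpha-eps)
```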
Received: December 2011. Revised: May 2012.

References

[1] G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, J. Math. Anal. Appl. 212 (1997), 237-262.

[2] G.A. Anastassiou, Quantitative Approximations, Chapman & Hall/CRC, Boca Raton, New York, 2001.

[3] G.A. Anastassiou, On right fractional calculus, Chaos, Solitons & Fractals 42 (2009), 365-376.

[4] G.A. Anastassiou, Fractional Differentiation Inequalities, Springer, New York, 2009.

[5] G.A. Anastassiou, Fractional Korovkin theory, Chaos, Solitons & Fractals 42 (2009), no. 4, 2080-2094.

[6] G.A. Anastassiou, Intelligent Systems: Approximation by Artificial Neural Networks, Intelligent Systems Reference Library, Vol. 19, Springer, Heidelberg, 2011.

[7] G.A. Anastassiou, Fractional representation formulae and right fractional inequalities, Mathematical and Computer Modelling 54 (2011), no. 11-12, 3098-3115.

[8] G.A. Anastassiou, Univariate hyperbolic tangent neural network approximation, Mathematical and Computer Modelling 53 (2011), 1111-1132.

[9] G.A. Anastassiou, Multivariate hyperbolic tangent neural network approximation, Computers and Mathematics with Applications 61 (2011), 809-821.

[10] G.A. Anastassiou, Multivariate sigmoidal neural network approximation, Neural Networks 24 (2011), 378-386.

[11] G.A. Anastassiou, Univariate sigmoidal neural network approximation, J. of Computational Analysis and Applications, accepted, 2011.

[12] Z. Chen and F. Cao, The approximation operators with sigmoidal functions, Computers and Mathematics with Applications 58 (2009), 758-765.

[13] K. Diethelm, The Analysis of Fractional Differential Equations, Lecture Notes in Mathematics 2004, Springer-Verlag, Berlin, Heidelberg, 2010.

[14] A.M.A. El-Sayed and M. Gaber, On the finite Caputo and finite Riesz derivatives, Electronic Journal of Theoretical Physics 3 (2006), no. 12, 81-95.

[15] G.S. Frederico and D.F.M. Torres, Fractional optimal control in the sense of Caputo and the fractional Noether's theorem, International Mathematical Forum 3 (2008), no. 10, 479-493.

[16] S. Haykin, Neural Networks: A Comprehensive Foundation (2nd ed.), Prentice Hall, New York, 1998.

[17] W. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics 7 (1943), 115-133.

[18] T.M. Mitchell, Machine Learning, WCB-McGraw-Hill, New York, 1997.