CUBO A Mathematical Journal
Vol. 16, No. 02, (33-47). June 2014

Voronovskaya type asymptotic expansions for multivariate quasi-interpolation neural network operators

George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, U.S.A.
ganastss@memphis.edu

ABSTRACT

Here we study further the multivariate quasi-interpolation of sigmoidal and hyperbolic tangent type neural network operators of one hidden layer. We derive multivariate Voronovskaya type asymptotic expansions for the error of approximation of these operators to the unit operator.

Keywords and Phrases: Multivariate neural network approximation, multivariate Voronovskaya type asymptotic expansion.

2010 AMS Mathematics Subject Classification: 41A25, 41A36, 41A60.

1 Background

Here we follow [5], [6].

We consider the sigmoidal function of logarithmic type
$$s_i(x_i) = \frac{1}{1+e^{-x_i}}, \quad x_i \in \mathbb{R}, \ i = 1,\dots,N; \quad x := (x_1,\dots,x_N) \in \mathbb{R}^N,$$
each of which has the properties
$$\lim_{x_i \to +\infty} s_i(x_i) = 1 \quad \text{and} \quad \lim_{x_i \to -\infty} s_i(x_i) = 0, \quad i = 1,\dots,N.$$
These functions play the role of activation functions in the hidden layer of neural networks.

As in [7], we consider
$$\Phi_i(x_i) := \frac{1}{2}\big(s_i(x_i+1) - s_i(x_i-1)\big), \quad x_i \in \mathbb{R}, \ i = 1,\dots,N.$$

We notice the following properties:

i) $\Phi_i(x_i) > 0$, $\forall\, x_i \in \mathbb{R}$,

ii) $\sum_{k_i=-\infty}^{\infty} \Phi_i(x_i - k_i) = 1$, $\forall\, x_i \in \mathbb{R}$,

iii) $\sum_{k_i=-\infty}^{\infty} \Phi_i(n x_i - k_i) = 1$, $\forall\, x_i \in \mathbb{R}$; $n \in \mathbb{N}$,

iv) $\int_{-\infty}^{\infty} \Phi_i(x_i)\, dx_i = 1$,

v) $\Phi_i$ is a density function,

vi) $\Phi_i$ is even: $\Phi_i(-x_i) = \Phi_i(x_i)$, $x_i \geq 0$, for $i = 1,\dots,N$.

We see that ([7])
$$\Phi_i(x_i) = \left(\frac{e^2-1}{2e^2}\right) \frac{1}{(1+e^{x_i-1})(1+e^{-x_i-1})}, \quad i = 1,\dots,N.$$

vii) $\Phi_i$ is decreasing on $\mathbb{R}_+$ and increasing on $\mathbb{R}_-$, $i = 1,\dots,N$.

Let $0 < \beta < 1$, $n \in \mathbb{N}$. Then as in [6] we get

viii) $$\sum_{\substack{k_i=-\infty \\ |n x_i - k_i| > n^{1-\beta}}}^{\infty} \Phi_i(n x_i - k_i) \leq 3.1992\, e^{-n^{(1-\beta)}}, \quad i = 1,\dots,N.$$

Denote by $\lceil \cdot \rceil$ the ceiling of a number, and by $\lfloor \cdot \rfloor$ its integral part. Consider $x \in \left(\prod_{i=1}^N [a_i,b_i]\right) \subset \mathbb{R}^N$, $N \in \mathbb{N}$, such that $\lceil n a_i \rceil \leq \lfloor n b_i \rfloor$, $i = 1,\dots,N$; $a := (a_1,\dots,a_N)$, $b := (b_1,\dots,b_N)$. As in [6] we obtain

ix) $$0 < \frac{1}{\sum_{k_i=\lceil n a_i \rceil}^{\lfloor n b_i \rfloor} \Phi_i(n x_i - k_i)} < \frac{1}{\Phi_i(1)} = 5.250312578, \quad \forall\, x_i \in [a_i,b_i], \ i = 1,\dots,N.$$

x) As in [6], we see that
$$\lim_{n\to\infty} \sum_{k_i=\lceil n a_i \rceil}^{\lfloor n b_i \rfloor} \Phi_i(n x_i - k_i) \neq 1,$$
for at least some $x_i \in [a_i,b_i]$, $i = 1,\dots,N$.

We will use here
$$\Phi(x_1,\dots,x_N) := \Phi(x) := \prod_{i=1}^N \Phi_i(x_i), \quad x \in \mathbb{R}^N. \tag{1}$$

It has the properties:

(i)' $\Phi(x) > 0$, $\forall\, x \in \mathbb{R}^N$.

We see that
$$\sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \cdots \sum_{k_N=-\infty}^{\infty} \Phi(x_1-k_1, x_2-k_2, \dots, x_N-k_N) = \prod_{i=1}^N \left( \sum_{k_i=-\infty}^{\infty} \Phi_i(x_i-k_i) \right) = 1.$$

That is

(ii)' $$\sum_{k=-\infty}^{\infty} \Phi(x-k) := \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \cdots \sum_{k_N=-\infty}^{\infty} \Phi(x_1-k_1,\dots,x_N-k_N) = 1,$$
where $k := (k_1,\dots,k_N)$, $\forall\, x \in \mathbb{R}^N$.

(iii)' $$\sum_{k=-\infty}^{\infty} \Phi(nx-k) := \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \cdots \sum_{k_N=-\infty}^{\infty} \Phi(n x_1-k_1,\dots,n x_N-k_N) = 1,$$
$\forall\, x \in \mathbb{R}^N$; $n \in \mathbb{N}$.

(iv)' $$\int_{\mathbb{R}^N} \Phi(x)\, dx = 1,$$
that is, $\Phi$ is a multivariate density function.

Here $\|x\|_\infty := \max\{|x_1|,\dots,|x_N|\}$, $x \in \mathbb{R}^N$; we also set $\infty := (\infty,\dots,\infty)$, $-\infty := (-\infty,\dots,-\infty)$ in the multivariate context, and
$$\lceil na \rceil := (\lceil n a_1 \rceil,\dots,\lceil n a_N \rceil), \quad \lfloor nb \rfloor := (\lfloor n b_1 \rfloor,\dots,\lfloor n b_N \rfloor).$$
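Properties ii), iv) and the constant in ix) are straightforward to confirm numerically from the definition of $\Phi_i$. The following is a minimal sketch, assuming NumPy; the name `phi` and the chosen test point are ours, for illustration only.

```python
import numpy as np

def phi(x):
    # Phi(x) = (s(x+1) - s(x-1))/2, with s the logistic sigmoid
    s = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (s(x + 1.0) - s(x - 1.0))

x = 0.37                            # arbitrary test point
ks = np.arange(-50, 51)             # Phi decays exponentially, so +-50 suffices
print(np.sum(phi(x - ks)))          # property ii): ~ 1.0

t = np.linspace(-40.0, 40.0, 800001)
dt = t[1] - t[0]
print(np.sum(phi(t)) * dt)          # property iv): Riemann sum ~ 1.0

print(1.0 / phi(1.0))               # compare with the constant quoted in ix)
```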
For $0 < \beta < 1$, $n \in \mathbb{N}$, and fixed $x \in \mathbb{R}^N$, we have that
$$\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k) = \sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty \leq \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k) + \sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k).$$

In the last two sums the counting is over disjoint sets of $k$'s, because the condition $\left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}$ implies that there exists at least one $\left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}$, $r \in \{1,\dots,N\}$.

It holds

(v)' $$\sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k) \leq 3.1992\, e^{-n^{(1-\beta)}},$$
$0 < \beta < 1$, $n \in \mathbb{N}$, $x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$.

Furthermore it holds

(vi)' $$0 < \frac{1}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k)} < (5.250312578)^N, \quad \forall\, x \in \left(\prod_{i=1}^N [a_i,b_i]\right), \ n \in \mathbb{N}.$$

It is also clear that

(vii)' $$\sum_{\substack{k=-\infty \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\infty} \Phi(nx-k) \leq 3.1992\, e^{-n^{(1-\beta)}},$$
$0 < \beta < 1$, $n \in \mathbb{N}$, $x \in \mathbb{R}^N$.

By x) we obviously see that

(viii)' $$\lim_{n\to\infty} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k) \neq 1$$
for at least some $x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$.

Let $f \in C\left(\prod_{i=1}^N [a_i,b_i]\right)$ and $n \in \mathbb{N}$ such that $\lceil n a_i \rceil \leq \lfloor n b_i \rfloor$, $i = 1,\dots,N$.

We define the multivariate positive linear neural network operator ($x := (x_1,\dots,x_N) \in \left(\prod_{i=1}^N [a_i,b_i]\right)$)
$$G_n(f, x_1,\dots,x_N) := G_n(f,x) := \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \Phi(nx-k)}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k)} \tag{2}$$
$$:= \frac{\sum_{k_1=\lceil na_1 \rceil}^{\lfloor nb_1 \rfloor} \sum_{k_2=\lceil na_2 \rceil}^{\lfloor nb_2 \rfloor} \cdots \sum_{k_N=\lceil na_N \rceil}^{\lfloor nb_N \rfloor} f\left(\frac{k_1}{n},\dots,\frac{k_N}{n}\right) \left(\prod_{i=1}^N \Phi_i(n x_i - k_i)\right)}{\prod_{i=1}^N \left(\sum_{k_i=\lceil na_i \rceil}^{\lfloor nb_i \rfloor} \Phi_i(n x_i - k_i)\right)}.$$

For large enough $n$ we always obtain $\lceil n a_i \rceil \leq \lfloor n b_i \rfloor$, $i = 1,\dots,N$. Also $a_i \leq \frac{k_i}{n} \leq b_i$ iff $\lceil n a_i \rceil \leq k_i \leq \lfloor n b_i \rfloor$, $i = 1,\dots,N$.

Notice here that for large enough $n \in \mathbb{N}$ we get
$$e^{-n^{(1-\beta)}} < n^{-\beta j}, \quad j = 1,\dots,m; \ m \in \mathbb{N}, \ 0 < \beta < 1.$$
Thus, given fixed $A, B > 0$, for the linear combination $\left(A n^{-\beta j} + B e^{-n^{(1-\beta)}}\right)$ the (dominant) rate of convergence to zero is $n^{-\beta j}$. The closer $\beta$ is to $1$, the faster the rate of convergence to zero.

By $AC^m\left(\prod_{i=1}^N [a_i,b_i]\right)$, $m, N \in \mathbb{N}$, we denote the space of functions such that all partial derivatives of order $(m-1)$ are coordinatewise absolutely continuous functions and, moreover, $f \in C^{m-1}\left(\prod_{i=1}^N [a_i,b_i]\right)$.

Let $f \in AC^m\left(\prod_{i=1}^N [a_i,b_i]\right)$, $m, N \in \mathbb{N}$. Here $f_\alpha$ denotes a partial derivative of $f$, $\alpha := (\alpha_1,\dots,\alpha_N)$, $\alpha_i \in \mathbb{Z}^+$, $i = 1,\dots,N$, and $|\alpha| := \sum_{i=1}^N \alpha_i = l$, where $l = 0,1,\dots,m$. We write also $f_\alpha := \frac{\partial^\alpha f}{\partial x^\alpha}$ and we say it is of order $l$. We denote
$$\|f_\alpha\|_{\infty,m}^{\max} := \max_{|\alpha|=m} \{\|f_\alpha\|_\infty\}, \tag{3}$$
where $\|\cdot\|_\infty$ is the supremum norm. We assume here that $\|f_\alpha\|_{\infty,m}^{\max} < \infty$.
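The operator (2) is a normalized, $\Phi$-weighted average of the samples $f\left(\frac{k}{n}\right)$, so it is direct to implement; since the weights are products of univariate values, they can be formed by an outer product. Below is a minimal sketch for $N = 2$, assuming NumPy; the names `G_n`, `phi` and the test function are ours.

```python
import numpy as np

def phi(x):
    # Phi(x) = (s(x+1) - s(x-1))/2, with s the logistic sigmoid
    s = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (s(x + 1.0) - s(x - 1.0))

def G_n(f, x, a, b, n):
    # the operator (2) for N = 2: k_i runs over ceil(n a_i) .. floor(n b_i)
    k1 = np.arange(np.ceil(n * a[0]), np.floor(n * b[0]) + 1)
    k2 = np.arange(np.ceil(n * a[1]), np.floor(n * b[1]) + 1)
    W = np.outer(phi(n * x[0] - k1), phi(n * x[1] - k2))  # Phi(nx - k), all k
    F = f(k1[:, None] / n, k2[None, :] / n)               # samples f(k/n)
    return np.sum(F * W) / np.sum(W)   # np.sum(W) factorizes, as in (2)

f = lambda u, v: np.sin(u) * np.exp(v)
x, a, b = (0.4, 0.6), (0.0, 0.0), (1.0, 1.0)
for n in (10, 100, 1000):
    print(n, abs(G_n(f, x, a, b, n) - f(*x)))   # error shrinks as n grows
```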
Next we follow [3], [4].

We consider here the hyperbolic tangent function $\tanh x$, $x \in \mathbb{R}$:
$$\tanh x := \frac{e^x - e^{-x}}{e^x + e^{-x}}.$$
It has the properties $\tanh 0 = 0$, $-1 < \tanh x < 1$, $\forall\, x \in \mathbb{R}$, and $\tanh(-x) = -\tanh x$. Furthermore $\tanh x \to 1$ as $x \to \infty$, $\tanh x \to -1$ as $x \to -\infty$, and it is strictly increasing on $\mathbb{R}$. This function plays the role of an activation function in the hidden layer of neural networks.

We further consider
$$\Psi(x) := \frac{1}{4}\big(\tanh(x+1) - \tanh(x-1)\big) > 0, \quad \forall\, x \in \mathbb{R}.$$
We easily see that $\Psi(-x) = \Psi(x)$, that is, $\Psi$ is even on $\mathbb{R}$. Obviously $\Psi$ is differentiable, thus continuous.

Proposition 1. ([3]) $\Psi(x)$ for $x \geq 0$ is strictly decreasing.

Obviously $\Psi(x)$ is strictly increasing for $x \leq 0$. Also it holds
$$\lim_{x\to-\infty} \Psi(x) = 0 = \lim_{x\to\infty} \Psi(x).$$
In fact $\Psi$ has the bell shape with horizontal asymptote the $x$-axis. So the maximum of $\Psi$ is attained at zero, $\Psi(0) = 0.3809297$.

Theorem 2. ([3]) We have that $\sum_{i=-\infty}^{\infty} \Psi(x-i) = 1$, $\forall\, x \in \mathbb{R}$.

Thus
$$\sum_{i=-\infty}^{\infty} \Psi(nx-i) = 1, \quad \forall\, n \in \mathbb{N}, \ \forall\, x \in \mathbb{R}.$$
Also it holds
$$\sum_{i=-\infty}^{\infty} \Psi(x+i) = 1, \quad \forall\, x \in \mathbb{R}.$$

Theorem 3. ([3]) It holds $\int_{-\infty}^{\infty} \Psi(x)\, dx = 1$.

So $\Psi(x)$ is a density function on $\mathbb{R}$.

Theorem 4. ([3]) Let $0 < \alpha < 1$ and $n \in \mathbb{N}$. It holds
$$\sum_{\substack{k=-\infty \\ |nx-k| \geq n^{1-\alpha}}}^{\infty} \Psi(nx-k) \leq e^4 \cdot e^{-2n^{(1-\alpha)}}.$$

Theorem 5. ([3]) Let $x \in [a,b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \leq \lfloor nb \rfloor$. It holds
$$\frac{1}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Psi(nx-k)} < \frac{1}{\Psi(1)} = 4.1488766.$$

Also by [3] we get that
$$\lim_{n\to\infty} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Psi(nx-k) \neq 1,$$
for at least some $x \in [a,b]$.

In this article we will use
$$\Theta(x_1,\dots,x_N) := \Theta(x) := \prod_{i=1}^N \Psi(x_i), \quad x = (x_1,\dots,x_N) \in \mathbb{R}^N, \ N \in \mathbb{N}. \tag{4}$$

It has the properties:

(i)* $\Theta(x) > 0$, $\forall\, x \in \mathbb{R}^N$,

(ii)* $$\sum_{k=-\infty}^{\infty} \Theta(x-k) := \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \cdots \sum_{k_N=-\infty}^{\infty} \Theta(x_1-k_1,\dots,x_N-k_N) = 1,$$
where $k := (k_1,\dots,k_N)$, $\forall\, x \in \mathbb{R}^N$,

(iii)* $$\sum_{k=-\infty}^{\infty} \Theta(nx-k) := \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \cdots \sum_{k_N=-\infty}^{\infty} \Theta(n x_1-k_1,\dots,n x_N-k_N) = 1,$$
$\forall\, x \in \mathbb{R}^N$; $n \in \mathbb{N}$,

(iv)* $$\int_{\mathbb{R}^N} \Theta(x)\, dx = 1,$$
that is, $\Theta$ is a multivariate density function.

We obviously see that
$$\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Theta(nx-k) = \sum_{k_1=\lceil na_1 \rceil}^{\lfloor nb_1 \rfloor} \cdots \sum_{k_N=\lceil na_N \rceil}^{\lfloor nb_N \rfloor} \prod_{i=1}^N \Psi(n x_i - k_i) = \prod_{i=1}^N \left( \sum_{k_i=\lceil na_i \rceil}^{\lfloor nb_i \rfloor} \Psi(n x_i - k_i) \right).$$

For $0 < \beta < 1$, $n \in \mathbb{N}$, and fixed $x \in \mathbb{R}^N$, we have that
$$\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Theta(nx-k) = \sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty \leq \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Theta(nx-k) + \sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Theta(nx-k).$$

In the last two sums the counting is over disjoint sets of $k$'s, because the condition $\left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}$ implies that there exists at least one $\left|\frac{k_r}{n}-x_r\right| > \frac{1}{n^\beta}$, $r \in \{1,\dots,N\}$.

It holds

(v)* $$\sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Theta(nx-k) \leq e^4 \cdot e^{-2n^{(1-\beta)}},$$
$0 < \beta < 1$, $n \in \mathbb{N}$, $x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$.

Also it holds

(vi)* $$0 < \frac{1}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Theta(nx-k)} < \frac{1}{(\Psi(1))^N} = (4.1488766)^N, \quad \forall\, x \in \left(\prod_{i=1}^N [a_i,b_i]\right), \ n \in \mathbb{N}.$$

It is clear that

(vii)* $$\sum_{\substack{k=-\infty \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\infty} \Theta(nx-k) \leq e^4 \cdot e^{-2n^{(1-\beta)}},$$
$0 < \beta < 1$, $n \in \mathbb{N}$, $x \in \mathbb{R}^N$.

Also we get
$$\lim_{n\to\infty} \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Theta(nx-k) \neq 1,$$
for at least some $x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$.

Let $f \in C\left(\prod_{i=1}^N [a_i,b_i]\right)$ and $n \in \mathbb{N}$ such that $\lceil n a_i \rceil \leq \lfloor n b_i \rfloor$, $i = 1,\dots,N$.

We define the multivariate positive linear neural network operator ($x := (x_1,\dots,x_N) \in \left(\prod_{i=1}^N [a_i,b_i]\right)$)
$$F_n(f, x_1,\dots,x_N) := F_n(f,x) := \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \Theta(nx-k)}{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Theta(nx-k)} \tag{5}$$
$$:= \frac{\sum_{k_1=\lceil na_1 \rceil}^{\lfloor nb_1 \rfloor} \sum_{k_2=\lceil na_2 \rceil}^{\lfloor nb_2 \rfloor} \cdots \sum_{k_N=\lceil na_N \rceil}^{\lfloor nb_N \rfloor} f\left(\frac{k_1}{n},\dots,\frac{k_N}{n}\right) \left(\prod_{i=1}^N \Psi(n x_i - k_i)\right)}{\prod_{i=1}^N \left(\sum_{k_i=\lceil na_i \rceil}^{\lfloor nb_i \rfloor} \Psi(n x_i - k_i)\right)}.$$
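Both the partition of unity of Theorem 2 and the constants quoted above are quick to confirm numerically, in the same way as for $\Phi$. A minimal sketch, again assuming NumPy (the name `psi` is ours); $F_n$ of (5) is then obtained from the `G_n` sketch above simply by swapping `phi` for `psi`.

```python
import numpy as np

def psi(x):
    # Psi(x) = (tanh(x+1) - tanh(x-1))/4
    return 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))

x = 0.37
ks = np.arange(-50, 51)
print(np.sum(psi(x - ks)))   # Theorem 2: partition of unity, ~ 1.0
print(psi(0.0))              # compare with the maximum quoted after Prop. 1
print(1.0 / psi(1.0))        # compare with the constant quoted in Theorem 5
```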
The neural networks we consider here have one hidden layer. In this article we find Voronovskaya type asymptotic expansions for the above described quasi-interpolation normalized neural network operators $G_n(f,x)$, $F_n(f,x)$, where $x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$ is fixed but arbitrary.

For other related neural network work, see [2], [3], [4], [5], [6] and [7]. For neural networks in general, see [8], [9] and [10].

Next we follow [1], pp. 284-286.

About the Taylor formula: multivariate case and estimates

Let $Q$ be a compact convex subset of $\mathbb{R}^N$; $N \geq 2$; $z := (z_1,\dots,z_N)$, $x_0 := (x_{01},\dots,x_{0N}) \in Q$.

Let $f : Q \to \mathbb{R}$ be such that all partial derivatives of order $(m-1)$ are coordinatewise absolutely continuous functions, $m \in \mathbb{N}$, and also $f \in C^{m-1}(Q)$. That is, $f \in AC^m(Q)$. Each $m$th order partial derivative is denoted by $f_\alpha := \frac{\partial^\alpha f}{\partial x^\alpha}$, where $\alpha := (\alpha_1,\dots,\alpha_N)$, $\alpha_i \in \mathbb{Z}^+$, $i = 1,\dots,N$, and $|\alpha| := \sum_{i=1}^N \alpha_i = m$.

Consider $g_z(t) := f(x_0 + t(z - x_0))$, $t \geq 0$. Then
$$g_z^{(j)}(t) = \left[\left(\sum_{i=1}^N (z_i - x_{0i}) \frac{\partial}{\partial x_i}\right)^j f\right](x_{01} + t(z_1 - x_{01}),\dots,x_{0N} + t(z_N - x_{0N})), \tag{6}$$
for all $j = 0,1,2,\dots,m$.

Example 6. Let $m = N = 2$. Then
$$g_z(t) = f(x_{01} + t(z_1 - x_{01}), x_{02} + t(z_2 - x_{02})), \quad t \in \mathbb{R},$$
and
$$g_z'(t) = (z_1 - x_{01}) \frac{\partial f}{\partial x_1}(x_0 + t(z - x_0)) + (z_2 - x_{02}) \frac{\partial f}{\partial x_2}(x_0 + t(z - x_0)). \tag{7}$$
Setting
$$(*) = (x_{01} + t(z_1 - x_{01}), x_{02} + t(z_2 - x_{02})) = (x_0 + t(z - x_0)),$$
we get
$$g_z''(t) = (z_1 - x_{01})^2 \frac{\partial^2 f}{\partial x_1^2}(*) + (z_1 - x_{01})(z_2 - x_{02}) \frac{\partial^2 f}{\partial x_2 \partial x_1}(*) + (z_1 - x_{01})(z_2 - x_{02}) \frac{\partial^2 f}{\partial x_1 \partial x_2}(*) + (z_2 - x_{02})^2 \frac{\partial^2 f}{\partial x_2^2}(*). \tag{8}$$
Similarly, we have the general case of $m, N \in \mathbb{N}$ for $g_z^{(m)}(t)$.

We mention the following multivariate Taylor theorem.

Theorem 7. Under the above assumptions we have
$$f(z_1,\dots,z_N) = g_z(1) = \sum_{j=0}^{m-1} \frac{g_z^{(j)}(0)}{j!} + R_m(z,0), \tag{9}$$
where
$$R_m(z,0) := \int_0^1 \left( \int_0^{t_1} \cdots \left( \int_0^{t_{m-1}} g_z^{(m)}(t_m)\, dt_m \right) \cdots \right) dt_1, \tag{10}$$
or
$$R_m(z,0) = \frac{1}{(m-1)!} \int_0^1 (1-\theta)^{m-1} g_z^{(m)}(\theta)\, d\theta. \tag{11}$$

Notice that $g_z(0) = f(x_0)$.

We make

Remark 8. Assume here that
$$\|f_\alpha\|_{\infty,Q,m}^{\max} := \max_{|\alpha|=m} \|f_\alpha\|_{\infty,Q} < \infty.$$
Then
$$\left\|g_z^{(m)}\right\|_{\infty,[0,1]} = \left\|\left[\left(\sum_{i=1}^N (z_i - x_{0i}) \frac{\partial}{\partial x_i}\right)^m f\right](x_0 + t(z - x_0))\right\|_{\infty,[0,1]} \leq \left(\sum_{i=1}^N |z_i - x_{0i}|\right)^m \|f_\alpha\|_{\infty,Q,m}^{\max}, \tag{12}$$
that is,
$$\left\|g_z^{(m)}\right\|_{\infty,[0,1]} \leq \left(\|z - x_0\|_{l_1}\right)^m \|f_\alpha\|_{\infty,Q,m}^{\max} < \infty. \tag{13}$$
Hence we get by (11) that
$$|R_m(z,0)| \leq \frac{\left\|g_z^{(m)}\right\|_{\infty,[0,1]}}{m!} < \infty. \tag{14}$$
And it holds
$$|R_m(z,0)| \leq \frac{\left(\|z - x_0\|_{l_1}\right)^m}{m!} \|f_\alpha\|_{\infty,Q,m}^{\max}, \tag{15}$$
$\forall\, z, x_0 \in Q$.

Inequality (15) will be an important tool in proving our main results.
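For a polynomial of total degree less than $m$ the remainder (10)/(11) vanishes, so formula (9) can be verified exactly. A small sketch, assuming SymPy; the polynomial and the two points are our arbitrary choices, used only to illustrate Theorem 7.

```python
import sympy as sp

u, v, t = sp.symbols('u v t')
f = u**3 + 2*u*v**2 + v      # total degree 3, so with m = 4 we have R_m = 0
x0 = (sp.Rational(1, 2), sp.Rational(1, 3))   # expansion point x_0
z = (2, -1)                                   # evaluation point z

# g_z(t) = f(x_0 + t(z - x_0)); by (9), f(z) = sum_{j<m} g_z^(j)(0)/j! + R_m
g = f.subs({u: x0[0] + t * (z[0] - x0[0]), v: x0[1] + t * (z[1] - x0[1])})
m = 4
taylor = sum(sp.diff(g, t, j).subs(t, 0) / sp.factorial(j) for j in range(m))
print(sp.simplify(taylor - f.subs({u: z[0], v: z[1]})))   # 0, as (9) predicts
```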
2 Main Results

We present our first main result.

Theorem 9. Let $0 < \beta < 1$, $x \in \prod_{i=1}^N [a_i,b_i]$, $n \in \mathbb{N}$ large enough, $f \in AC^m\left(\prod_{i=1}^N [a_i,b_i]\right)$, $m, N \in \mathbb{N}$. Assume further that $\|f_\alpha\|_{\infty,m}^{\max} < \infty$. Then
$$G_n(f,x) - f(x) = \sum_{j=1}^{m-1} \left( \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) G_n\left( \prod_{i=1}^N (\cdot - x_i)^{\alpha_i}, x \right) \right) + o\left( \frac{1}{n^{\beta(m-\varepsilon)}} \right), \tag{16}$$
where $0 < \varepsilon \leq m$.

If $m = 1$, the sum in (16) collapses.

The last (16) implies that
$$n^{\beta(m-\varepsilon)} \left[ G_n(f,x) - f(x) - \sum_{j=1}^{m-1} \left( \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) G_n\left( \prod_{i=1}^N (\cdot - x_i)^{\alpha_i}, x \right) \right) \right] \to 0, \tag{17}$$
as $n \to \infty$, $0 < \varepsilon \leq m$.

When $m = 1$, or when $f_\alpha(x) = 0$ for all $|\alpha| = j$, $j = 1,\dots,m-1$, we derive that
$$n^{\beta(m-\varepsilon)} [G_n(f,x) - f(x)] \to 0,$$
as $n \to \infty$, $0 < \varepsilon \leq m$.

Proof. Consider $g_z(t) := f(x_0 + t(z - x_0))$, $t \geq 0$; $x_0, z \in \prod_{i=1}^N [a_i,b_i]$. Then
$$g_z^{(j)}(t) = \left[\left(\sum_{i=1}^N (z_i - x_{0i}) \frac{\partial}{\partial x_i}\right)^j f\right](x_{01} + t(z_1 - x_{01}),\dots,x_{0N} + t(z_N - x_{0N})), \tag{18}$$
for all $j = 0,1,\dots,m$.

By (9) we have the multivariate Taylor formula
$$f(z_1,\dots,z_N) = g_z(1) = \sum_{j=0}^{m-1} \frac{g_z^{(j)}(0)}{j!} + \frac{1}{(m-1)!} \int_0^1 (1-\theta)^{m-1} g_z^{(m)}(\theta)\, d\theta. \tag{19}$$

Notice $g_z(0) = f(x_0)$. Also for $j = 0,1,\dots,m-1$, we have
$$g_z^{(j)}(0) = \sum_{\substack{\alpha := (\alpha_1,\dots,\alpha_N),\ \alpha_i \in \mathbb{Z}^+, \\ i=1,\dots,N,\ |\alpha| := \sum_{i=1}^N \alpha_i = j}} \left( \frac{j!}{\prod_{i=1}^N \alpha_i!} \right) \left( \prod_{i=1}^N (z_i - x_{0i})^{\alpha_i} \right) f_\alpha(x_0). \tag{20}$$

Furthermore
$$g_z^{(m)}(\theta) = \sum_{\substack{\alpha := (\alpha_1,\dots,\alpha_N),\ \alpha_i \in \mathbb{Z}^+, \\ i=1,\dots,N,\ |\alpha| = m}} \left( \frac{m!}{\prod_{i=1}^N \alpha_i!} \right) \left( \prod_{i=1}^N (z_i - x_{0i})^{\alpha_i} \right) f_\alpha(x_0 + \theta(z - x_0)), \tag{21}$$
$0 \leq \theta \leq 1$.

So we treat $f \in AC^m\left(\prod_{i=1}^N [a_i,b_i]\right)$ with $\|f_\alpha\|_{\infty,m}^{\max} < \infty$. Thus, by (19), we have for $\frac{k}{n}, x \in \left(\prod_{i=1}^N [a_i,b_i]\right)$ that
$$f\left(\frac{k_1}{n},\dots,\frac{k_N}{n}\right) - f(x) = \sum_{j=1}^{m-1} \sum_{\substack{\alpha:\ |\alpha| = j}} \left( \frac{1}{\prod_{i=1}^N \alpha_i!} \right) \left( \prod_{i=1}^N \left( \frac{k_i}{n} - x_i \right)^{\alpha_i} \right) f_\alpha(x) + R, \tag{22}$$
where
$$R := m \int_0^1 (1-\theta)^{m-1} \sum_{\substack{\alpha:\ |\alpha| = m}} \left( \frac{1}{\prod_{i=1}^N \alpha_i!} \right) \left( \prod_{i=1}^N \left( \frac{k_i}{n} - x_i \right)^{\alpha_i} \right) f_\alpha\left( x + \theta\left( \frac{k}{n} - x \right) \right) d\theta. \tag{23}$$

By (15) we obtain
$$|R| \leq \frac{\left( \left\| x - \frac{k}{n} \right\|_{l_1} \right)^m}{m!} \|f_\alpha\|_{\infty,m}^{\max}. \tag{24}$$

Notice here that
$$\left\| \frac{k}{n} - x \right\|_\infty \leq \frac{1}{n^\beta} \ \Leftrightarrow \ \left| \frac{k_i}{n} - x_i \right| \leq \frac{1}{n^\beta}, \ i = 1,\dots,N. \tag{25}$$

So, if $\left\| \frac{k}{n} - x \right\|_\infty \leq \frac{1}{n^\beta}$, we get that $\left\| x - \frac{k}{n} \right\|_{l_1} \leq \frac{N}{n^\beta}$, and
$$|R| \leq \frac{N^m}{n^{m\beta}\, m!} \|f_\alpha\|_{\infty,m}^{\max}. \tag{26}$$

Also we see that
$$\left\| x - \frac{k}{n} \right\|_{l_1} = \sum_{i=1}^N \left| x_i - \frac{k_i}{n} \right| \leq \sum_{i=1}^N (b_i - a_i) = \|b - a\|_{l_1},$$
therefore in general it holds
$$|R| \leq \frac{\left( \|b - a\|_{l_1} \right)^m}{m!} \|f_\alpha\|_{\infty,m}^{\max}. \tag{27}$$

Call
$$V(x) := \sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k).$$

Hence we have
$$U_n(x) := \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k)\, R}{V(x)} = \frac{\sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty \leq \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k)\, R}{V(x)} + \frac{\sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k)\, R}{V(x)}. \tag{28}$$

Consequently we obtain
$$|U_n(x)| \leq \left( \frac{\sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty \leq \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k)}{V(x)} \right) \left( \frac{N^m}{n^{m\beta}\, m!} \|f_\alpha\|_{\infty,m}^{\max} \right) + \frac{1}{V(x)} \left( \sum_{\substack{k=\lceil na \rceil \\ \left\|\frac{k}{n}-x\right\|_\infty > \frac{1}{n^\beta}}}^{\lfloor nb \rfloor} \Phi(nx-k) \right) \frac{\left( \|b-a\|_{l_1} \right)^m}{m!} \|f_\alpha\|_{\infty,m}^{\max}$$
(by (v)', (vi)')
$$\leq \frac{N^m}{n^{m\beta}\, m!} \|f_\alpha\|_{\infty,m}^{\max} + (5.250312578)^N (3.1992)\, e^{-n^{(1-\beta)}} \frac{\left( \|b-a\|_{l_1} \right)^m}{m!} \|f_\alpha\|_{\infty,m}^{\max}. \tag{29}$$

Therefore we have found
$$|U_n(x)| \leq \frac{\|f_\alpha\|_{\infty,m}^{\max}}{m!} \left\{ \frac{N^m}{n^{m\beta}} + (5.250312578)^N (3.1992)\, e^{-n^{(1-\beta)}} \left( \|b-a\|_{l_1} \right)^m \right\}. \tag{30}$$

For large enough $n \in \mathbb{N}$ we get
$$|U_n(x)| \leq \left( \frac{2 \|f_\alpha\|_{\infty,m}^{\max} N^m}{m!} \right) \left( \frac{1}{n^{m\beta}} \right). \tag{31}$$

That is
$$|U_n(x)| = O\left( \frac{1}{n^{m\beta}} \right), \tag{32}$$
and
$$|U_n(x)| = o(1). \tag{33}$$

And, letting $0 < \varepsilon \leq m$, we derive
$$\frac{|U_n(x)|}{\left( \frac{1}{n^{\beta(m-\varepsilon)}} \right)} \leq \left( \frac{2 \|f_\alpha\|_{\infty,m}^{\max} N^m}{m!} \right) \frac{1}{n^{\beta\varepsilon}} \to 0, \tag{34}$$
as $n \to \infty$. That is,
$$|U_n(x)| = o\left( \frac{1}{n^{\beta(m-\varepsilon)}} \right). \tag{35}$$

By (22) we observe that
$$\frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\left(\frac{k}{n}\right) \Phi(nx-k)}{V(x)} - f(x) = \sum_{j=1}^{m-1} \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k) \left( \prod_{i=1}^N \left( \frac{k_i}{n} - x_i \right)^{\alpha_i} \right)}{V(x)} + \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx-k)\, R}{V(x)}. \tag{36}$$

The last says
$$G_n(f,x) - f(x) - \sum_{j=1}^{m-1} \left( \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) G_n\left( \prod_{i=1}^N (\cdot - x_i)^{\alpha_i}, x \right) \right) = U_n(x). \tag{37}$$

The proof of the theorem is complete.

We present our second main result.

Theorem 10. Let $0 < \beta < 1$, $x \in \prod_{i=1}^N [a_i,b_i]$, $n \in \mathbb{N}$ large enough, $f \in AC^m\left(\prod_{i=1}^N [a_i,b_i]\right)$, $m, N \in \mathbb{N}$. Assume further that $\|f_\alpha\|_{\infty,m}^{\max} < \infty$. Then
$$F_n(f,x) - f(x) = \sum_{j=1}^{m-1} \left( \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) F_n\left( \prod_{i=1}^N (\cdot - x_i)^{\alpha_i}, x \right) \right) + o\left( \frac{1}{n^{\beta(m-\varepsilon)}} \right), \tag{38}$$
where $0 < \varepsilon \leq m$.

If $m = 1$, the sum in (38) collapses.

The last (38) implies that
$$n^{\beta(m-\varepsilon)} \left[ F_n(f,x) - f(x) - \sum_{j=1}^{m-1} \left( \sum_{|\alpha|=j} \left( \frac{f_\alpha(x)}{\prod_{i=1}^N \alpha_i!} \right) F_n\left( \prod_{i=1}^N (\cdot - x_i)^{\alpha_i}, x \right) \right) \right] \to 0, \tag{39}$$
as $n \to \infty$, $0 < \varepsilon \leq m$.

When $m = 1$, or when $f_\alpha(x) = 0$ for all $|\alpha| = j$, $j = 1,\dots,m-1$, we derive that
$$n^{\beta(m-\varepsilon)} [F_n(f,x) - f(x)] \to 0,$$
as $n \to \infty$, $0 < \varepsilon \leq m$.

Proof. Similar to Theorem 9, using the properties of $\Theta(x)$ (see (4), (i)*-(vii)*) and (5).
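Theorems 9 and 10 can also be observed numerically. For $N = 1$ and $m = 2$ the bracketed quantity in (17) is exactly $U_n(x) = G_n(f,x) - f(x) - f'(x)\, G_n((\cdot - x), x)$, and (32) says it decays at least like $n^{-2\beta}$ for every $\beta < 1$. A minimal sketch, assuming NumPy; the names and the test function are ours, and swapping `phi` for the earlier `psi` gives the analogous check for Theorem 10.

```python
import numpy as np

def phi(x):
    # Phi(x) = (s(x+1) - s(x-1))/2, with s the logistic sigmoid
    s = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (s(x + 1.0) - s(x - 1.0))

def G_n(g, x, a, b, n):
    # the operator (2) with N = 1
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = phi(n * x - k)
    return np.sum(g(k / n) * w) / np.sum(w)

f, df = np.sin, np.cos
x, a, b = 0.4, 0.0, 1.0
for n in (10, 100, 1000):
    # U_n(x) of (37) for m = 2: subtract the first-order term of the expansion
    U = G_n(f, x, a, b, n) - f(x) - df(x) * G_n(lambda t: t - x, x, a, b, n)
    print(n, abs(U), n**2 * abs(U))   # n^2 |U_n| stays bounded, consistent
                                      # with (32) for every beta < 1
```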
Received: December 2011. Revised: May 2012.

References

[1] G.A. Anastassiou, Advanced Inequalities, World Scientific Publ. Co., Singapore, New Jersey, 2011.

[2] G.A. Anastassiou, Intelligent Systems: Approximation by Artificial Neural Networks, Intelligent Systems Reference Library, Vol. 19, Springer, Heidelberg, 2011.

[3] G.A. Anastassiou, Univariate hyperbolic tangent neural network approximation, Mathematical and Computer Modelling, 53 (2011), 1111-1132.

[4] G.A. Anastassiou, Multivariate hyperbolic tangent neural network approximation, Computers and Mathematics with Applications, 61 (2011), 809-821.

[5] G.A. Anastassiou, Multivariate sigmoidal neural network approximation, Neural Networks, 24 (2011), 378-386.

[6] G.A. Anastassiou, Univariate sigmoidal neural network approximation, accepted, Journal of Computational Analysis and Applications, 2011.

[7] Z. Chen and F. Cao, The approximation operators with sigmoidal functions, Computers and Mathematics with Applications, 58 (2009), 758-765.

[8] S. Haykin, Neural Networks: A Comprehensive Foundation (2nd ed.), Prentice Hall, New York, 1998.

[9] W. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 7 (1943), 115-133.

[10] T.M. Mitchell, Machine Learning, WCB-McGraw-Hill, New York, 1997.