International Journal of Analysis and Applications, Volume 19, Number 6 (2021), 929-948. DOI: 10.28924/2291-8639-19-2021-929. Received May 15th, 2021; accepted June 24th, 2021; published November 16th, 2021.

2010 Mathematics Subject Classification. 34A45.

Key words and phrases. stiff systems; variable step; variable order; Adams family; Milne's estimate.

©2021 Authors retain the copyrights of their papers, and all open access articles are distributed under the terms of the Creative Commons Attribution License.

A COMPUTATIONAL STRATEGY OF VARIABLE STEP, VARIABLE ORDER FOR SOLVING STIFF SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS

J. G. OGHONYON*, P. O. OGUNNIYI, I. F. OGBU
Department of Mathematics, Covenant University, Ota, Nigeria
*Corresponding author: godwin.oghonyon@covenantuniversity.edu.ng

ABSTRACT: This study presents a computational strategy of variable step, variable order (CSVSVO) for solving stiff systems of ordinary differential equations. Newton's interpolation formula, combined with divided differences as the basis function approximation, is used to design the method. The performance of the variable step, variable order strategy is analyzed, and several stiff systems of ordinary differential equations are solved to demonstrate the efficiency and accuracy of the method.

NOMENCLATURE
CSVSVO: errors in CSVSVO for solving test application problems 1, 2 and 3.
Memployed: approach employed.
Maxerrors: the magnitude of the maximum errors of CSVSVO.
ConvCriteria: convergence criteria.
Source of Application Problem I: see [5] for more info.
Source of Application Problem II: see [28] for more info.
Source of Application Problem III: see [18] for more info.

1.
INTRODUCTION

In many applied sciences, such as chemical kinetics, mass-spring-damper systems, and control system analysis, one encounters systems of differential equations whose analytical solutions contain terms whose magnitudes change at substantially different rates. For instance, if the analytical solution contains the terms $e^{-at}$ and $e^{-bt}$ with $a, b > 0$ and $a$ much greater than $b$, then $e^{-at}$ decays to zero far more quickly than $e^{-bt}$ does. In the presence of such a rapidly decaying transient, a numerical method can become unstable unless the step length is made excessively small. Explicit techniques are generally subject to this stability restriction; the very small step lengths it forces increase the number of function evaluations required, which in turn allows round-off error to accumulate, thereby limiting accuracy and efficiency. Implicit techniques, on the other hand, are free of this stability limitation and are therefore preferable for computing stiff systems of differential equations [27].

The concept of a stiff initial value problem is best appreciated by studying the following general $m$-dimensional linear system with constant coefficients:
$$y' = Ay + B(x), \quad y(a) = y_0, \quad (1)$$
where $A$ is an $m \times m$ matrix with real entries and $B(x), y, y'$ are $m$-vectors. The theoretical solution of (1) can be written as
$$y(x) = \sum_{i=1}^{m} \alpha_i e^{\lambda_i x} c_i + y_p(x), \quad (2)$$
where $\lambda_i$, $i = 1(1)m$, are the eigenvalues of $A$, with $c_i$, $i = 1(1)m$, the corresponding eigenvectors; $y_p(x)$ is a particular solution of (1), and $\alpha_i$, $i = 1(1)m$, are real constants uniquely determined by the initial condition $y(a) = y_0$ [14, 18-19].
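To make the opening remarks concrete, here is a minimal sketch on a hypothetical scalar test equation (not one of the paper's application problems) of why an explicit method becomes unstable unless the step length is very small, while an implicit method does not:

```python
# Scalar stiff test equation y' = -a*y with a = 1000 (illustrative choice).
# Explicit Euler is stable only for h < 2/a; backward Euler decays for any h > 0.

def explicit_euler(a, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + h * (-a * y)        # y_{n+1} = y_n + h f(y_n) = (1 - a h) y_n
    return y

def implicit_euler(a, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + a * h)       # solve y_{n+1} = y_n + h (-a y_{n+1})
    return y

a, h, steps = 1000.0, 0.01, 50      # a*h = 10, far outside the explicit stability bound
print(abs(explicit_euler(a, h, steps)))   # grows like 9**50
print(implicit_euler(a, h, steps))        # decays toward 0, as the true solution does
```

The exact solution $e^{-at}$ is essentially zero after a few steps; the explicit iterate instead grows like $|1 - ah|^n$, which is exactly the stability restriction described above.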
Definition 1: A solution vector (or solution) of the system (1) on the interval $I$ is an $m \times 1$ vector $y(x) = (y_1(x), y_2(x), \ldots, y_m(x))^T$ whose components $y_i(x)$ are differentiable functions satisfying (1) on $I$. See [1] for details.

Definition 2: Any set $\{\alpha_i\}_{i=1}^{m}$, $\alpha_i = (\alpha_{1i}, \alpha_{2i}, \ldots, \alpha_{mi})^T$, of $m$ linearly independent solution vectors of $y' = Ay$ on an interval $I$ is called a fundamental set of solutions of $y' = Ay$ on $I$. See [1] for more info.

Definition 3: The stiffness ratio $S$ of the system (1) is defined as
$$S = \left\{ \frac{|\lambda_i|(b - a)}{\ln(\mathrm{TOL})} \right\}, \quad (3)$$
where $\ln(\mathrm{TOL})$ is the natural logarithm of TOL. The stiffness ratio defined by (3) is a measure of the dispersion of the time constants of (1) and, in practical problems, may be of the order of $10^8$. See [8] for more info.

Definition 4: The initial value problem (1) is said to be stiff if it satisfies the following two conditions: (i) $\mathrm{Re}(\lambda_i) < 0$, $i = 1(1)m$, and (ii) the stiffness ratio $S > 1$. Nevertheless, it should be observed that this is a rather loose mathematical definition. Stiffness occurs whenever the step length is limited by stability, rather than accuracy, conditions. See [8] for more info.
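As a hedged illustration of Definitions 3-5 (using the common max/min form of the stiffness ratio over the real parts of the eigenvalues, which is a simplification of the form in (3)), the eigenvalues $\lambda = -2, -40 \pm 40i$ of Application Problem 3 below give:

```python
# Stiffness ratio in the common max/min form: max|Re(lambda)| / min|Re(lambda)|.
# The eigenvalues used here are read off the analytical solution of
# Application Problem 3 (terms e^{-2t} and e^{-40t}cos(40t), e^{-40t}sin(40t)).

def stiffness_ratio(eigenvalues):
    if any(lam.real >= 0 for lam in eigenvalues):
        raise ValueError("stiffness in Definition 4 requires Re(lambda_i) < 0")
    reals = [abs(lam.real) for lam in eigenvalues]
    return max(reals) / min(reals)

print(stiffness_ratio([-2 + 0j, -40 + 40j, -40 - 40j]))  # -> 20.0
```

A ratio of 20 is mildly stiff; as noted after (3), practical problems can reach ratios of order $10^8$.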
Definition 5: The initial value problem $y' = f(x, y)$, $y(a) = y_0$, $y = (y_1, y_2, \ldots, y_m)^T$, $y_0 = (\eta_1, \eta_2, \ldots, \eta_m)^T$, is said to be stiff oscillatory whenever the eigenvalues $\lambda_i = u_i + jv_i$, $i = 1(1)m$, of the Jacobian $J = \partial f / \partial y$ have the following attributes: $u_i < 0$, $i = 1(1)m$, and $\max_{1 \le i \le m} |u_i| > \min_{1 \le i \le m} |u_i|$; or whenever the stiffness ratio satisfies $\max_{i,j} |u_i / u_j| > 1$ and $|u_i| < |v_i|$ for at least one pair of indices $i$, $1 \le i \le m$. See [8] for more info.

Definition 6: Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length. See [18-19] for details.

Definition 7: Stiffness occurs when some components of the solution decay much more rapidly than others. See [18-19] for details.

Authors have contributed immensely toward solving stiff systems of ordinary differential equations using diverse strategies. [9] implemented extensions of the predictor-corrector method for the solution of systems of ordinary differential equations. [11] studied the effect of variable mesh size on the stability of multistep methods. [12] established the stability and convergence of variable order multistep methods. [14] developed a diagonally implicit block backward differentiation formula with optimal stability properties for stiff ordinary differential equations. [15] introduced a variable step, variable order multistep method for the numerical solution of ordinary differential equations. [16] designed algorithms for changing the step size. [17] worked on changing step size in the integration of differential equations using modified divided differences. [21-25] developed and implemented variable step size methods of fixed order for solving ordinary differential equations.
[26] derived the stability, consistency and convergence of variable $k$-step methods for the numerical integration of large systems of ordinary differential equations. This research extends the idea of [21-25] by devising a variable step, variable order method, together with variable step size, for solving stiff systems of ordinary differential equations. [18-19] note that this idea leads to better efficiency and accuracy, and also circumvents Theorem 4 below. Nonstiff algorithms possess a limited region of absolute stability (RAS), whereas stiff algorithms possess an unlimited RAS. This explains why stiff algorithms can adopt large step lengths beyond the transient (nonstiff) phase. In the transient phase (boundary layer), an automatic code seeks the optimal mesh size that holds the local truncation error within the bound of the prescribed accuracy; there, accuracy requirements oblige a numerical integrator to adopt a mesh size of the order of the smallest time constant. Beyond the transient phase (i.e., in the stiff phase), the growth of propagated errors (instability) restricts the selection of mesh size whenever a nonstiff technique is used, since stability limitations are independent of the accuracy requirements. Beyond the transient phase there is always the wish to adopt a fairly large mesh size, but this is frequently bounded by stability conditions. See [8] for more info.

Theorem 1: The set $\{\alpha_i\}_{i=1}^{m}$, $\alpha_i = (\alpha_{1i}, \alpha_{2i}, \ldots, \alpha_{mi})^T$, is linearly independent if and only if the Wronskian
$$W(\alpha_1, \ldots, \alpha_m) = \det(\alpha_1\, \alpha_2 \cdots \alpha_m) = \begin{vmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1m} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2m} \\ \vdots & \vdots & & \vdots \\ \alpha_{m1} & \alpha_{m2} & \cdots & \alpha_{mm} \end{vmatrix} \neq 0.$$
See [1] for more info.
Theorem 2: Let $S = \{\alpha_i\}_{i=1}^{m}$ be a set of $m$ linearly independent solutions of $y' = Ay$. Then every solution of $y' = Ay$ is a linear combination of these solutions. See [1] for more info.

Theorem 3 (Existence and Uniqueness): Assume that each of the functions $f_1(x, y_1, \ldots, y_m), \ldots, f_m(x, y_1, \ldots, y_m)$ and the partial derivatives $\partial f_i / \partial y_j$, $i, j = 1(1)m$, are continuous in a region $R$ containing the point $(x_0, t_1, \ldots, t_m)$. Then the initial-value problem
$$y_1' = f_1(x, y_1, \ldots, y_m), \; \ldots, \; y_m' = f_m(x, y_1, \ldots, y_m), \quad y_1(x_0) = t_1, \ldots, y_m(x_0) = t_m \quad (4)$$
has a unique solution
$$y_1 = \phi_1(x), \; y_2 = \phi_2(x), \; \ldots, \; y_m = \phi_m(x) \quad (5)$$
on an interval $I$ containing $x = x_0$. See [1, 3] for more info.

Theorem 4 (Dahlquist Barrier Theorem): An A-stable linear multistep method
• must be implicit, and
• the most accurate A-stable linear multistep method is the trapezoidal scheme, of order $p = 2$ and error constant $C_3 = -\frac{1}{12}$.
See [8] for more info.

The Dahlquist barrier can be circumvented by adopting unconventional numerical integrators, some of which are
• nonlinear multistep schemes,
• multiderivative multistep schemes,
• exponential fitting, and
• extrapolation processes.

The variable step, variable order predictor-corrector algorithm emerged from the broad computational experience accumulated over the years. Such an algorithm is fundamental to high-level efficiency and accuracy, with the potential to change automatically not only the step length but also the order (and thus the step number) of the techniques used. Algorithms with this potential are known as variable step, variable order, or VSVO, algorithms. Although mostly built to handle nonstiff initial value problems, various existing VSVO codes include options for stiff systems. The necessary elements of a VSVO algorithm are:
• a family of methods,
• a starting procedure,
• a local error estimator,
• a strategy for determining when to vary the step length and/or order,
• a technique for varying the step length and/or order,
• a softcode written in a mathematical package, if manual computation is too tedious, and
• a special basis function approximation for estimating stiff systems, if the desired accuracy and efficiency are not otherwise achieved.

In addition, the convergence properties of predictor-corrector methods previously proved under the assumption of constant step length and constant order still hold in VSVO formulations. The results of [11-12] indicate that a VSVO algorithm based on an Adams-Bashforth-Moulton pair in PECE mode (ABM), with step variation attained by a variable coefficient technique, is always convergent (as the maximal step length used in the interval of integration tends to zero). Whenever an interpolatory technique is used, convergence is ensured provided the step/order-varying strategy is such that there exists a constant $N$ such that in any $N$ consecutive steps there are always $k$ steps of constant length taken by the same $k$th-order ABM method, for some value of $k$.
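The listed ingredients can be sketched in miniature. The following is a hedged illustration, not the authors' code: it uses the simplest possible "family" (forward Euler, with the local error estimated by comparing one full step against two half steps), so it varies only the step, not the order, but the accept/reject/resize logic is the part that every VSVO code shares:

```python
import math

def adaptive_euler(f, x0, y0, x_end, h0=0.1, tol=1e-6):
    """Variable-step forward Euler with a step-doubling local error estimator."""
    x, y, h = x0, y0, h0
    while x < x_end:
        h = min(h, x_end - x)
        y_big = y + h * f(x, y)                          # one step of size h
        y_half = y + 0.5 * h * f(x, y)                   # two steps of size h/2
        y_small = y_half + 0.5 * h * f(x + 0.5 * h, y_half)
        err = abs(y_small - y_big)                       # local error estimate
        if err <= tol:                                   # accept the step
            x, y = x + h, y_small
            if err < 0.1 * tol:
                h *= 2.0                                 # estimate very small: grow h
        else:
            h *= 0.5                                     # reject and retry with h/2
    return y

y_end = adaptive_euler(lambda x, y: -y, 0.0, 1.0, 1.0)
print(abs(y_end - math.exp(-1.0)))   # small: the controller keeps the error near tol
```

A full VSVO code replaces forward Euler by the Adams family developed in Section II, the step-doubling estimate by Milne's estimate of Section II.C, and adds a rule for switching order within the family.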
These results stress once again that variable coefficient techniques, although generally more costly and cumbersome to implement, are essentially more effective than interpolatory techniques. See [18-19] for more details. The following sections demonstrate the usefulness of these strategies.

II. MATERIALS AND METHODS

[4] was evidently the first author to propose a "smooth" change of a step size $h$ to a new step size $wh$. [9, 15] developed his ideas further: we consider an arbitrary grid $(x_n)$ and denote the step sizes by $h_n = x_{n+1} - x_n$. We presume that approximations $y_i$ to $y(x_i)$ are known for $i = n-k+1, \ldots, n$, set $f_i = f(x_i, y_i)$, and let $p(x)$ denote the polynomial which interpolates the values $(x_i, f_i)$ for $i = n-k+1, \ldots, n$. Using Newton's interpolation formula we get
$$p(x) = \sum_{i=0}^{k-1} \prod_{j=0}^{i-1} (x - x_{n-j})\, \delta^i f[x_n, x_{n-1}, \ldots, x_{n-i}], \quad (6)$$
where the divided differences $\delta^i f[x_n, x_{n-1}, \ldots, x_{n-i}]$ are defined recursively by
$$\delta^0 f[x_n] = f_n, \quad \delta^i f[x_n, x_{n-1}, \ldots, x_{n-i}] = \frac{\delta^{i-1} f[x_n, \ldots, x_{n-i+1}] - \delta^{i-1} f[x_{n-1}, \ldots, x_{n-i}]}{x_n - x_{n-i}}. \quad (7)$$
It is practical to rewrite (6) as
$$p(x) = \sum_{i=0}^{k-1} \prod_{j=0}^{i-1} \frac{x - x_{n-j}}{x_{n+1} - x_{n-j}} \cdot \Phi_i^*(n), \quad (8)$$
where
$$\Phi_i^*(n) = \prod_{j=0}^{i-1} (x_{n+1} - x_{n-j}) \cdot \delta^i f[x_n, x_{n-1}, \ldots, x_{n-i}]. \quad (9)$$
We now define the approximation to $y(x_{n+1})$ by
$$y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} p(x)\, dx. \quad (10)$$
Substituting (8) into (10) gives
$$y_{n+1} = y_n + h_n \sum_{i=0}^{k-1} g_i(n) \Phi_i^*(n), \quad (11)$$
with
$$g_i(n) = \frac{1}{h_n} \int_{x_n}^{x_{n+1}} \prod_{j=0}^{i-1} \frac{x - x_{n-j}}{x_{n+1} - x_{n-j}}\, dx. \quad (12)$$
Equation (11) is the extension of the explicit Adams method to variable step sizes [13]. The variable step size implicit Adams methods can be derived likewise. Let $p^*(x)$ be the polynomial of degree $k$ that interpolates $(x_i, f_i)$ for $i = n-k+1, \ldots, n+1$ (the value $f_{n+1} = f(x_{n+1}, y_{n+1})$ involves the unknown solution $y_{n+1}$). Once more, employing Newton's interpolation formula, we get
$$p^*(x) = p(x) + \prod_{j=0}^{k-1} (x - x_{n-j}) \cdot \delta^k f[x_{n+1}, x_n, \ldots, x_{n-k+1}].$$
The numerical solution defined by
$$y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} p^*(x)\, dx$$
is then given directly by
$$y_{n+1} = p_{n+1} + h_n g_k(n) \Phi_k(n+1), \quad (13)$$
where $p_{n+1}$ is the numerical approximation obtained by the explicit Adams method,
$$p_{n+1} = y_n + h_n \sum_{i=0}^{k-1} g_i(n) \Phi_i^*(n),$$
and where
$$\Phi_k(n+1) = \prod_{j=0}^{k-1} (x_{n+1} - x_{n-j}) \cdot \delta^k f[x_{n+1}, x_n, \ldots, x_{n-k+1}]. \quad (14)$$
We next state recurrence relations for $g_i(n)$, $\Phi_i(n)$ and $\Phi_i^*(n)$. The cost of computing the integration coefficients is the greatest obstacle to allowing arbitrary variations in the step size [16]. The values $\Phi_i^*(n)$, $i = 0, \ldots, k-1$, and $\Phi_k(n+1)$ can be computed efficiently with the recurrence relations
$$\Phi_0(n) = \Phi_0^*(n) = f_n, \quad \Phi_{i+1}(n) = \Phi_i(n) - \Phi_i^*(n-1), \quad \Phi_i^*(n) = \beta_i(n) \Phi_i(n), \quad (15)$$
which are an immediate consequence of (9) and (14).
The coefficients
$$\beta_i(n) = \prod_{j=0}^{i-1} \frac{x_{n+1} - x_{n-j}}{x_n - x_{n-j-1}}$$
can be computed by
$$\beta_0(n) = 1, \quad \beta_i(n) = \beta_{i-1}(n)\, \frac{x_{n+1} - x_{n-i+1}}{x_n - x_{n-i}}.$$
The computation of the coefficients $g_i(n)$ is trickier [17]. We introduce the $q$-fold integral
$$c_{iq}(x) = \frac{(q-1)!}{h_n^q} \int_{x_n}^{x} \cdots \int_{x_n}^{\epsilon_{q-1}} \prod_{j=0}^{i-1} \frac{\epsilon_0 - x_{n-j}}{x_{n+1} - x_{n-j}}\, d\epsilon_0 \cdots d\epsilon_{q-1} \quad (16)$$
and remark that $g_i(n) = c_{i1}(x_{n+1})$ [13].

A. Theoretical Analysis of the Method

Definition 8: The order of the operator $L_h$ is the largest $r$ such that, whenever $y(x)$ possesses a continuous $(r+1)$th derivative,
$$L_h(y(x)) = O(h^{r+1}). \quad (17)$$
If we assume a continuous $(r+2)$th derivative for $y$, then we can substitute the Taylor series for $y$ and $y'$ with $O(h^{r+2})$ remainders. When the terms in $h^0, h^1, \ldots, h^{r+1}$ are collected together, we arrive at
$$L_h(y(x)) = \sum_{q=0}^{r+1} C_q h^q y^{(q)}(x) + O(h^{r+2}),$$
where
$$C_q = \begin{cases} \sum_{i=0}^{k} \alpha_i, & q = 0, \\ \sum_{i=0}^{k} \left[ \dfrac{(-i)^q}{q!}\alpha_i - \dfrac{(-i)^{q-1}}{(q-1)!}\beta_i \right], & q > 0. \end{cases} \quad (18)$$
The linear equations $C_q = 0$, $q \le r$, are the equations that determine an $r$th order method. The principal local truncation error introduced at each step is then
$$C_{r+1} \sum_{i=0}^{k} \beta_i\, h^{r+1} y^{(r+1)} + O(h^{r+2}),$$
so the natural standardization is to take $\sum_{i=0}^{k} \beta_i = 1$ [10]. (19)

Theorem 5: If the multistep method (29) is stable and of order 1, then it is convergent. If the method (29) is stable and of order $p$, then it is convergent of order $p$ [13].
Theorem 6: The multistep method is of order $p$ if and only if one of the following equivalent conditions is met:
(i) $\sum_{i=0}^{k} \alpha_i = 0$ and $\sum_{i=0}^{k} \alpha_i i^q = q \sum_{i=0}^{k} \beta_i i^{q-1}$ for $q = 1, \ldots, p$;
(ii) $\varrho(e^h) - h\sigma(e^h) = O(h^{p+1})$ for $h \to 0$;
(iii) $\dfrac{\varrho(\zeta)}{\log \zeta} - \sigma(\zeta) = O((\zeta - 1)^p)$ for $\zeta \to 1$;
where the linear difference operator $L$ is defined by
$$L(y, x, h) = \sum_{i=0}^{k} \left( \alpha_i y(x + ih) - h\beta_i y'(x + ih) \right) \quad (20)$$
[13].

Proof. Expanding $y(x + ih)$ and $y'(x + ih)$ in Taylor series and inserting the truncated series into (20) gives
$$L(y, x, h) = \sum_{i=0}^{k} \left( \alpha_i \sum_{q \ge 0} \frac{i^q}{q!} h^q y^{(q)}(x) - h\beta_i \sum_{r \ge 0} \frac{i^r}{r!} h^r y^{(r+1)}(x) \right) \quad (21)$$
$$= y(x) \sum_{i=0}^{k} \alpha_i + \sum_{q \ge 1} \frac{h^q}{q!} y^{(q)}(x) \left( \sum_{i=0}^{k} \alpha_i i^q - q \sum_{i=0}^{k} \beta_i i^{q-1} \right).$$
This shows the equivalence of condition (i) with $L(y, x, h) = O(h^{p+1})$ for all sufficiently regular functions $y(x)$. We next show that the three conditions are equivalent. The identity
$$L(\exp, 0, h) = \varrho(e^h) - h\sigma(e^h),$$
where $\exp$ denotes the exponential function, together with
$$L(\exp, 0, h) = \sum_{i=0}^{k} \alpha_i + \sum_{q \ge 1} \frac{h^q}{q!} \left( \sum_{i=0}^{k} \alpha_i i^q - q \sum_{i=0}^{k} \beta_i i^{q-1} \right),$$
which follows from (21), proves the equivalence of conditions (i) and (ii). Using the transformation $\zeta = e^h$ (or $h = \log \zeta$), condition (ii) can be written in the form
$$\varrho(\zeta) - \log \zeta \cdot \sigma(\zeta) = O((\log \zeta)^{p+1}) \quad \text{for } \zeta \to 1.$$
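Condition (i) of Theorem 6 is easy to check mechanically. The following hedged sketch (exact rational arithmetic; the helper name is ours) finds the order and the error constant from the analogue of (18) for the trapezoidal rule, confirming the values quoted in Theorem 4:

```python
# Trapezoidal rule: y_{n+1} - y_n = (h/2)(f_{n+1} + f_n),
# i.e. alpha = (-1, 1), beta = (1/2, 1/2). Expect order p = 2, C_3 = -1/12.

from fractions import Fraction
from math import factorial

def order_and_constant(alpha, beta):
    assert sum(alpha) == 0                       # consistency: C_0 = 0
    q = 1
    while True:
        lhs = sum(a * i**q for i, a in enumerate(alpha))
        rhs = q * sum(b * i**(q - 1) for i, b in enumerate(beta))
        if lhs != rhs:                           # first violated condition: order is q-1
            p = q - 1
            # error constant C_{p+1} = sum_i [ i^{p+1}/(p+1)! a_i - i^p/p! b_i ]
            C = sum(Fraction(i**(p + 1), factorial(p + 1)) * a
                    - Fraction(i**p, factorial(p)) * b
                    for i, (a, b) in enumerate(zip(alpha, beta)))
            return p, C

alpha = [Fraction(-1), Fraction(1)]
beta = [Fraction(1, 2), Fraction(1, 2)]
print(order_and_constant(alpha, beta))   # -> (2, Fraction(-1, 12))
```

The returned error constant $C_3 = -\frac{1}{12}$ is exactly the one named in the Dahlquist Barrier Theorem.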
But this condition is equivalent to (iii), since $\log \zeta = (\zeta - 1) + O((\zeta - 1)^2)$ for $\zeta \to 1$. See [13] for more info.

B. Convergence

Convergence of variable step size Adams methods was first considered by [26]. To show convergence in the general case we introduce the vector $Y_n = (y_{n+k-1}, \ldots, y_{n+1}, y_n)^T$. The method
$$y_{n+k} + \sum_{i=0}^{k-1} \alpha_{in} y_{n+i} = h_{n+k-1} \sum_{i=0}^{k} \beta_{in} f_{n+i} \quad (22)$$
then becomes equivalent to the one-step formulation
$$Y_{n+1} = (A_n \otimes I) Y_n + h_{n+k-1} \Phi_n(x_n, Y_n, h_n), \quad (23)$$
where $A_n$ is the companion matrix
$$A_n = \begin{pmatrix} -\alpha_{k-1,n} & -\alpha_{k-2,n} & \cdots & -\alpha_{0,n} \\ 1 & 0 & \cdots & 0 \\ & \ddots & & \vdots \\ 0 & & 1 & 0 \end{pmatrix} \quad (24)$$
and $\Phi_n(x_n, Y_n, h_n) = (e_1 \otimes I) \Psi_n(x_n, Y_n, h_n)$. The value $\Psi = \Psi_n(x_n, Y_n, h_n)$ is defined implicitly by
$$\Psi = \sum_{i=0}^{k-1} \beta_{in} f(x_{n+i}, y_{n+i}) + \beta_{kn} f\Big(x_{n+k},\, -\sum_{i=0}^{k-1} \alpha_{in} y_{n+i} + h_{n+k-1}\Psi\Big).$$
We proceed by denoting by
$$Y(x_n) = (y(x_{n+k-1}), \ldots, y(x_{n+1}), y(x_n))^T$$
the exact values approximated by $Y_n$. The convergence theorem can now be stated as follows. See [13] for more info.

Theorem 7: Assume that
• the method (22) is stable, of order $p$, and has bounded coefficients $\alpha_{in}$ and $\beta_{in}$;
• the starting values satisfy $\|Y(x_0) - Y_0\| = O(h_0^p)$;
• the step size ratios are bounded: $h_n / h_{n-1} \le \Omega$.
Then the method is convergent of order $p$; i.e., for every differential equation $y' = f(x, y)$, $y(x_0) = y_0$, with $f$ sufficiently differentiable, the global error satisfies
$$\|Y(x_n) - Y_n\| \le C h^p \quad \text{for } x_n \le \bar{x},$$
where $h = \max h_i$. See [13] for more info.

Proof. Because the method has order $p$ and the coefficients and step size ratios are bounded, we have
$$y(x_{n+k}) + \sum_{i=0}^{k-1} \alpha_{in} y(x_{n+i}) - h_{n+k-1} \sum_{i=0}^{k} \beta_{in} y'(x_{n+i}) = O(h_n^{p+1}),$$
from which it follows that the local truncation error
$$\delta_{n+1} = Y(x_{n+1}) - (A_n \otimes I) Y(x_n) - h_{n+k-1} \Phi_n(x_n, Y(x_n), h_n) \quad (25)$$
satisfies
$$\delta_{n+1} = O(h_n^{p+1}). \quad (26)$$
Subtracting (23) from (25) we get
$$Y(x_{n+1}) - Y_{n+1} = (A_n \otimes I)\big(Y(x_n) - Y_n\big) + h_{n+k-1}\big( \Phi_n(x_n, Y(x_n), h_n) - \Phi_n(x_n, Y_n, h_n) \big) + \delta_{n+1},$$
and by induction it follows that
$$Y(x_{n+1}) - Y_{n+1} = \big( (A_n \cdots A_0) \otimes I \big)\big(Y(x_0) - Y_0\big) + \sum_{i=0}^{n} h_{i+k-1} \big( (A_n \cdots A_{i+1}) \otimes I \big)\big( \Phi_i(x_i, Y(x_i), h_i) - \Phi_i(x_i, Y_i, h_i) \big) + \sum_{i=0}^{n} \big( (A_n \cdots A_{i+1}) \otimes I \big) \delta_{i+1}.$$
One derives, as in earlier proofs, that $\Phi_n$ satisfies a uniform Lipschitz condition with respect to $Y_n$. This, together with stability and (26), implies that
$$\|Y(x_{n+1}) - Y_{n+1}\| \le \sum_{i=0}^{n} h_{i+k-1} L \|Y(x_i) - Y_i\| + C_1 h^p.$$
To resolve this difference inequality, we introduce the sequence $\{\varepsilon_n\}$ defined by
$$\varepsilon_0 = \|Y(x_0) - Y_0\|, \quad \varepsilon_{n+1} = \sum_{i=0}^{n} h_{i+k-1} L \varepsilon_i + C_1 h^p.$$
(27)
A simple induction argument proves that
$$\|Y(x_n) - Y_n\| \le \varepsilon_n. \quad (28)$$
From (27) we get, for $n \ge 1$,
$$\varepsilon_{n+1} = \varepsilon_n + h_{n+k-1} L \varepsilon_n \le \exp(L h_{n+k-1})\, \varepsilon_n,$$
so that in addition
$$\varepsilon_n \le \exp\big((\bar{x} - x_0)L\big)\, \varepsilon_1 = \exp\big((\bar{x} - x_0)L\big) \cdot \big( h_{k-1} L \|Y(x_0) - Y_0\| + C_1 h^p \big).$$
This inequality, together with (28), completes the proof of Theorem 7. See [13] for more info.

C. Implementing the Convergence Criteria of Variable Step, Variable Order

The use of Milne's estimate for the principal local truncation error requires that the predictor and corrector possess the same order. This is attained by taking the predictor to be a $k$-step Adams-Bashforth method and the corrector to be a $(k-1)$-step Adams-Moulton method; both then possess order $p = k$. The $k$-step, $k$th order ABM pair is therefore
$$y_{n+1} = y_n + h \sum_{i=0}^{k-1} \gamma_i^* \nabla^i f_n, \quad p^* = k, \quad C_{k+1}^* = \gamma_k^*,$$
$$y_{n+1} = y_n + h \sum_{i=0}^{k-1} \gamma_i \nabla^i f_{n+1}, \quad p = k, \quad C_{k+1} = \gamma_k, \qquad k = 1, 2, \ldots \quad (29)$$
If we imagine (29) being employed in $P(EC)^\mu E^{1-t}$ mode then, in the second formula of (29), $y_{n+1}$ is replaced by $y_{n+1}^{[\nu+1]}$ and the single value $f_{n+1}$ on the right-hand side by $f_{n+1}^{[\nu]}$, the remaining values $f_{n-j}$ being replaced by $f_{n-j}^{[\mu-t]}$, $j = 0, 1, \ldots, k-1$. We can overcome this difficulty by defining $\nabla_\nu^i f_{n+1}^{[\mu]}$ to be $\nabla^i f_{n+1}^{[\mu]}$ with the single value $f_{n+1}^{[\mu]}$ replaced by $f_{n+1}^{[\nu]}$.
That is,
$$\nabla_\nu^i f_{n+1}^{[\mu]} = \nabla^i f_{n+1}^{[\mu]} + f_{n+1}^{[\nu]} - f_{n+1}^{[\mu]}. \quad (30)$$
We may also use the identity
$$\sum_{i=0}^{k-1} \left( \gamma_i \nabla^i f_{n+1}^{[\mu]} - \gamma_i^* \nabla^i f_n^{[\mu]} \right) = \gamma_{k-1}^* \nabla^k f_{n+1}^{[\mu]}, \quad (31)$$
where the notation is defined by (29). We now employ the pair (29) in $P(EC)^\mu E^{1-t}$ mode, and use the structure of the Adams methods to formulate a form of the ABM method which is computationally convenient and economical. The mode is defined by
$$P: \quad y_{n+1}^{[0]} = y_n^{[\mu]} + h \sum_{i=0}^{k-1} \gamma_i^* \nabla^i f_n^{[\mu-1]}, \quad (32)$$
$$(EC)^\mu: \quad f_{n+1}^{[\nu]} = f(x_{n+1}, y_{n+1}^{[\nu]}), \quad y_{n+1}^{[\nu+1]} = y_n^{[\mu]} + h \sum_{i=0}^{k-1} \gamma_i \nabla_\nu^i f_{n+1}^{[\mu-1]}, \quad \nu = 0, 1, \ldots, \mu - 1, \quad (33)$$
$$E^{1-t}: \quad f_{n+1}^{[\mu]} = f(x_{n+1}, y_{n+1}^{[\mu]}) \quad \text{whenever } t = 0.$$
To employ the Milne estimate, we must calculate $y_{n+1}^{[\mu]} - y_{n+1}^{[0]}$. Subtracting (32) from (33) with $\nu = \mu - 1$ gives
$$y_{n+1}^{[\mu]} - y_{n+1}^{[0]} = h \sum_{i=0}^{k-1} \left( \gamma_i \nabla_{\mu-1}^i f_{n+1}^{[\mu-t]} - \gamma_i^* \nabla^i f_n^{[\mu-t]} \right) = h \gamma_{k-1}^* \nabla_{\mu-1}^k f_{n+1}^{[\mu-t]}.$$
Since $C_{k+1}^* = \gamma_k^*$ and $C_{k+1} = \gamma_k$, the Milne estimate with weight
$$W = \frac{C_{p+1}}{C_{p+1}^* - C_{p+1}}$$
for the principal local truncation error at $x_{n+1}$ (which we denote by $T_{n+1}$) is given by
$$T_{n+1} = \frac{C_{p+1}}{C_{p+1}^* - C_{p+1}} \left( y_{n+1}^{[\mu]} - y_{n+1}^{[0]} \right) = \frac{\gamma_k}{\gamma_k^* - \gamma_k}\, h \gamma_{k-1}^* \nabla_{\mu-1}^k f_{n+1}^{[\mu-t]}.$$
Since $\gamma_k^* - \gamma_k = \gamma_{k-1}^*$, it follows that
$$T_{n+1} = h \gamma_k \nabla_{\mu-1}^k f_{n+1}^{[\mu-t]}.$$
See [2, 6-7, 18-19, 21-25] for more info.

III.
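For $k = 2$ the pair (29) is the AB2 predictor with the trapezoidal corrector, with $\gamma_2^* = 5/12$ and $\gamma_2 = -1/12$. The hedged sketch below (our names; PECE mode, i.e. $\mu = 1$) forms Milne's estimate $W(y_{n+1}^{[1]} - y_{n+1}^{[0]})$ and checks that it scales like $h^3$, as a principal local truncation error of a second-order pair should:

```python
import math

def pece_step(f, x0, y0, x1, y1, h):
    """One PECE step of the AB2/AM2 pair with Milne's error estimate.
    Error constants: C*_3 = 5/12 (AB2), C_3 = -1/12 (trapezoidal AM2)."""
    f0, f1 = f(x0, y0), f(x1, y1)
    y_pred = y1 + h * (3 * f1 - f0) / 2            # P: Adams-Bashforth 2
    f_pred = f(x1 + h, y_pred)                     # E
    y_corr = y1 + h * (f_pred + f1) / 2            # C: Adams-Moulton (trapezoidal)
    milne = (-1/12) / (5/12 - (-1/12)) * (y_corr - y_pred)   # W * (y_corr - y_pred)
    return y_corr, milne

f = lambda x, y: -y                                # test equation with known solution e^{-x}

def estimate(h):
    """Milne estimate after one step from exact history values of y' = -y."""
    _, t = pece_step(f, 0.0, 1.0, h, math.exp(-h), h)
    return abs(t)

ratio = estimate(0.01) / estimate(0.005)           # should be near 2**3 = 8
print(ratio)
```

Halving $h$ shrinks the estimate by roughly $2^3 = 8$, consistent with $T_{n+1} = O(h^3)$ for $p = 2$; a VSVO code compares this estimate against the convergence criterion to accept or reject the step.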
Practical Examples of Stiff Systems of First Order ODEs

We are interested in the computational interpretation of these attributes. A problem is stiff whenever the analytical solution being sought varies slowly, but nearby solutions vary rapidly, so that the numerical method must take small step sizes to obtain acceptable results. See [20] for more info.

Application Problem 1 (An Engineering Example): In chemical engineering, a complicated production process may involve several reactors connected by inflow and outflow pipes. If there are $n$ reactors, the whole process is governed by a system of $n$ differential equations; here,
$$\begin{pmatrix} x'(t) \\ y'(t) \\ z'(t) \end{pmatrix} = \begin{pmatrix} -\frac{8}{3} & -\frac{4}{3} & 1 \\ -\frac{17}{3} & -\frac{4}{3} & 1 \\ -\frac{35}{3} & \frac{14}{3} & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} 12 \\ 29 \\ 48 \end{pmatrix}.$$
Analytical Solution:
$$x(t) = \tfrac{1}{6} e^{-3t}\left(6 - 50e^{t} + 10e^{2t} + 34e^{3t}\right),$$
$$y(t) = \tfrac{1}{6} e^{-3t}\left(12 - 125e^{t} + 40e^{2t} + 73e^{3t}\right),$$
$$z(t) = \tfrac{1}{6} e^{-3t}\left(14 - 200e^{t} + 70e^{2t} + 116e^{3t}\right).$$
See [5] for more info.

Application Problem 2: We examine the system $y' = Ay + g(t)$ with coefficient matrix $A$, initial condition $y(0)$ and forcing function $g(t)$ given by
$$A = \begin{pmatrix} 0 & -1 & 1 \\ 0 & 2 & 0 \\ -2 & -1 & 3 \end{pmatrix}, \quad y(0) = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad g(t) = \begin{pmatrix} 1 \\ t \\ e^{-t} \end{pmatrix}.$$
Analytical Solution:
$$y(t) = \begin{pmatrix} \frac{7}{12}e^{2t} + \frac{3}{2}e^{t} + \frac{1}{6}e^{-t} - \frac{t}{2} - \frac{9}{4} \\[4pt] \frac{1}{4}e^{2t} - \frac{t}{2} - \frac{1}{4} \\[4pt] \frac{17}{12}e^{2t} + \frac{3}{2}e^{t} - \frac{1}{6}e^{-t} - \frac{t}{2} - \frac{7}{4} \end{pmatrix}.$$
See [28] for more info.

Application Problem 3: We study the initial value problem $y'(t) = Ay(t)$, $y(0) = (1, 0, -1)^T$, where
$$A = \begin{pmatrix} -21 & 19 & -20 \\ 19 & -21 & 20 \\ 40 & -40 & -40 \end{pmatrix}.$$
Analytical Solution:
$$y_1(t) = \tfrac{1}{2}e^{-2t} + \tfrac{1}{2}e^{-40t}\left(\cos 40t + \sin 40t\right),$$
$$y_2(t) = \tfrac{1}{2}e^{-2t} - \tfrac{1}{2}e^{-40t}\left(\cos 40t + \sin 40t\right),$$
$$y_3(t) = -e^{-40t}\left(\cos 40t - \sin 40t\right).$$
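As a hedged numerical cross-check of Application Problem 3 (the choice of backward Euler here is ours, for simplicity; the paper's own method is the Adams-based CSVSVO), a pure-Python backward Euler run with a small fixed step can be compared against the first component of the analytical solution:

```python
import math

# Application Problem 3: y' = A y, y(0) = (1, 0, -1)^T, integrated to t = 0.1
A = [[-21.0, 19.0, -20.0],
     [19.0, -21.0, 20.0],
     [40.0, -40.0, -40.0]]
y = [1.0, 0.0, -1.0]

def solve3(M, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * c for a, c in zip(M[r], M[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

h, t_end = 1e-4, 0.1
for _ in range(int(round(t_end / h))):
    # backward Euler: (I - h A) y_{n+1} = y_n
    IhA = [[(1.0 if i == j else 0.0) - h * A[i][j] for j in range(3)] for i in range(3)]
    y = solve3(IhA, y)

exact = 0.5 * math.exp(-2 * t_end) + 0.5 * math.exp(-40 * t_end) * (
    math.cos(40 * t_end) + math.sin(40 * t_end))
print(abs(y[0] - exact))   # small residual against the analytical y_1(0.1)
```

Note also that $y_1 + y_2 = e^{-2t}$ exactly, which gives a second, very sensitive check on the computed solution.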
See [18] for more info.

IV. RESULTS

This section implements the computational strategy of variable step, variable order for solving stiff systems of ordinary differential equations. The application problems were run under the following convergence criteria: $10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}, 10^{-10}$ and $10^{-11}$. The results for application problems 1, 2 and 3 are displayed in Tables 1, 2 and 3.

TABLE I. CSVSVO IMPLEMENTATION
Memployed   Maxerrors     ConvCriteria
CSVSVO      6.73296E-05   10^-2
CSVSVO      1.50195E-01   10^-2
CSVSVO      2.30889E-01   10^-2
CSVSVO      7.17852E-09   10^-4
CSVSVO      1.81011E-03   10^-4
CSVSVO      3.07211E-03   10^-4
CSVSVO      7.17852E-13   10^-6
CSVSVO      1.84418E-05   10^-6
CSVSVO      3.16097E-05   10^-6
CSVSVO      6.24284E-16   10^-8
CSVSVO      1.84762E-07   10^-8
CSVSVO      3.17E-07      10^-8
CSVSVO      1.96457E-16   10^-10
CSVSVO      1.84796E-09   10^-10
CSVSVO      3.17E-09      10^-10

TABLE II. CSVSVO IMPLEMENTATION
Memployed   Maxerrors     ConvCriteria
CSVSVO      1.33456E-04   10^-3
CSVSVO      1.10578E-02   10^-3
CSVSVO      3.81005E-02   10^-3
CSVSVO      1.43869E-08   10^-5
CSVSVO      2.10822E-04   10^-5
CSVSVO      4.47082E-04   10^-5
CSVSVO      1.43996E-12   10^-7
CSVSVO      2.22913E-06   10^-7
CSVSVO      4.47991E-06   10^-7
CSVSVO      1.11022E-16   10^-9
CSVSVO      2.24143E-08   10^-9
CSVSVO      4.48E-08      10^-9
CSVSVO      1.11022E-16   10^-11
CSVSVO      2.24226E-10   10^-11
CSVSVO      4.48E-10      10^-11

TABLE III. CSVSVO IMPLEMENTATION
Memployed   Maxerrors     ConvCriteria
CSVSVO      6.73296E-05   10^-2
CSVSVO      1.50195E-01   10^-2
CSVSVO      2.30889E-01   10^-2
CSVSVO      7.17852E-09   10^-4
CSVSVO      1.81011E-03   10^-4
CSVSVO      3.07211E-03   10^-4
CSVSVO      7.17852E-13   10^-6
CSVSVO      1.84418E-05   10^-6
CSVSVO      3.16097E-05   10^-6
CSVSVO      6.24284E-16   10^-8
CSVSVO      1.84762E-07   10^-8
CSVSVO      3.17E-07      10^-8
CSVSVO      1.96457E-16   10^-10
CSVSVO      1.84796E-09   10^-10
CSVSVO      3.17E-09      10^-10

V. CONCLUSION

Application problems 1, 2 and 3 are stiff systems of ordinary differential equations; such systems tend to generate unstable numerical behavior and therefore require a technique like CSVSVO to guarantee improved efficiency and better accuracy. The stiff systems were solved using the CSVSVO implementation. CSVSVO introduces convergence criteria to ensure that the desired result is achieved; these criteria decide whether a result is accepted or rejected. The implementation uses a multiprocessing approach executed on the Mathematica software platform. Tables 1, 2 and 3 display the computational results, establishing that CSVSVO meets the convergence criteria $10^{-2}, 10^{-3}, \ldots, 10^{-11}$. In addition, from the trend of the maximum errors achieved under the different convergence criteria, we conclude that CSVSVO is capable of resolving stiff systems of ordinary differential equations with better efficiency and accuracy, as exhibited in Tables 1, 2 and 3. See [5, 18, 21-25] for details.

Acknowledgements: The authors would like to thank Covenant University for providing sponsorship throughout the study period.
Many thanks to the anonymous reviewers for their continuous contribution.

Conflicts of Interest: The author(s) declare that there are no conflicts of interest regarding the publication of this paper.

REFERENCES

[1] M. L. Abell, J. P. Braselton, Differential Equations with Mathematica, Elsevier Academic Press, USA, 2004.
[2] U. M. Ascher, L. R. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, USA, 1998.
[3] K. Atkinson, W. Han, D. Stewart, Numerical Solution of Ordinary Differential Equations, John Wiley & Sons, Inc., New Jersey, 2009.
[4] F. Ceschino, Modification de la longueur du pas dans l'intégration numérique par les méthodes à pas liés, Chiffres, Vol. 2, 101-106, 1961.
[5] W. Cheney, D. Kincaid, Numerical Mathematics and Computing, Thomson Brooks/Cole, USA, 2008.
[6] J. R. Dormand, Numerical Methods for Differential Equations, CRC Press, New York, 1996.
[7] J. D. Faires, R. L. Burden, Initial-Value Problems for ODEs, Dublin City University, 2012.
[8] S. O. Fatunla, Numerical Methods for Initial Value Problems in Ordinary Differential Equations, Academic Press, Inc., New York, 1988.
[9] C. V. D. Forrington, Extensions of the predictor-corrector method for the solution of systems of ordinary differential equations, Comput. J. 4 (1961), 80-84.
[10] C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Inc., New Jersey, USA, 1971.
[11] C. W. Gear, K. W. Tu, The effect of variable mesh size on the stability of multistep methods, SIAM J. Numer. Anal. 11 (1974), 1025-1043.
[12] C. W. Gear, D. S. Watanabe, Stability and convergence of variable order multistep methods, SIAM J. Numer. Anal. 11 (1974), 1044-1058.
[13] E. Hairer, S. P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I, Springer, Heidelberg, 2009.
[14] M. I. Hazizah, B. I. Zarina, Diagonally implicit block backward differentiation formula with optimal stability properties for stiff ordinary differential equations, Symmetry, 11 (2019), 1342.
[15] F. T. Krogh, A variable step variable order multistep method for the numerical solution of ordinary differential equations, Information Processing 68, North-Holland, Amsterdam, 194-199, 1969.
[16] F. T. Krogh, Algorithms for changing the step size, SIAM J. Numer. Anal. 10 (1973), 949-965.
[17] F. T. Krogh, Changing step size in the integration of differential equations using modified divided differences, Proceedings of the Conference on the Numerical Solution of Ordinary Differential Equations, Lecture Notes in Math., Vol. 362, 22-71, 1974.
[18] J. D. Lambert, Computational Methods in Ordinary Differential Equations, John Wiley & Sons, Inc., New York, 1973.
[19] J. D. Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Inc., New York, 1991.
[20] C. B. Moler, Numerical Computing with MATLAB, SIAM, USA, 2004.
[21] J. G. Oghonyon, S. A. Okunuga, N. A. Omoregbe, O. O. Agboola, A computational approach in estimating the amount of pond and determining the long time behavioural representation of pond pollution, Global Journal of Pure and Applied Mathematics, Vol. 11, 2773-2786, 2015.
[22] J. G. Oghonyon, J. Ehigie, S. K. Eke, Investigating the convergence of some selected properties on block predictor-corrector methods and its applications, J. Eng. Appl. Sci. 11 (2017), 2402-2408.
[23] J. G. Oghonyon, O. A. Adesanya, H. Akewe, H. I. Okagbue, Softcode of multi-processing Milne's device for estimating first-order ordinary differential equations, Asian J. Sci. Res. 11 (2018), 553-559.
[24] J. G. Oghonyon, O. F. Imaga, P. O. Ogunniyi, The reversed estimation of variable step size implementation for solving nonstiff ordinary differential equations, Int. J. Civil Eng. Technol. 9 (2018), 332-340.
[25] J. G. Oghonyon, S. A. Okunuga, H. I. Okagbue, Expanded trigonometrically matched block variable-step-size technics for computing oscillating vibrations, Lecture Notes in Engineering and Computer Science, Vol. 2239, 552-557, 2019.
[26] P. Piotrowsky, Stability, consistency and convergence of variable k-step methods for numerical integration of large systems of ordinary differential equations, Lecture Notes in Math., Vol. 109, 221-227, 1969.
[27] S. E. Ramin, Numerical Methods for Engineers and Scientists Using MATLAB®, CRC Press, London, 2017.
[28] P. M. Shankar, Differential Equations: A Problem Solving Approach Based on MATLAB®, CRC Press, USA, 2018.