A COMPUTATIONAL STRATEGY OF VARIABLE STEP, VARIABLE ORDER FOR SOLVING STIFF SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS

J. G. OGHONYON*, P. O. OGUNNIYI, I. F. OGBU

ABSTRACT. This research study focuses on a computational strategy of variable step, variable order (CSVSVO) for solving stiff systems of ordinary differential equations. The idea of Newton's interpolation formula combined with divided differences as the basis function approximation is used to design the method. The performance of the variable step, variable order strategy is analyzed and justified. Some examples of stiff systems of ordinary differential equations are solved to demonstrate the efficiency and accuracy of the method.

NOMENCLATURE
CSVSVO: errors in CSVSVO for solving test application problems 1, 2 and 3.
Memployed: approach employed.
Maxerrors: the magnitude of the maximum errors of CSVSVO.
ConvCriteria: convergence criteria.
Source of Application Problem I: see [5] for more info.
Source of Application Problem II: see [28] for more info.
Source of Application Problem III: see [18] for more info.

Int. J. Anal. Appl. 19 (6) (2021) 930


INTRODUCTION
In diverse applied sciences, such as chemical kinetics, mass-spring-damper systems, and control system analysis, we find systems of differential equations whose analytical solutions comprise terms whose magnitudes change at substantially different rates. For instance, whenever the analytical solution includes the terms e^{-λt} and e^{-μt}, with λ, μ > 0, where the magnitude of λ is much greater than that of μ, then e^{-λt} decays to zero far more quickly than e^{-μt} does. In the presence of such a rapidly decaying transient, a given computational technique becomes unstable unless the step length is excessively small. Explicit techniques are generally subject to this stability restriction; the resulting use of very small step lengths greatly increases the number of function evaluations needed to approximate the solution, which in turn causes round-off error to accumulate, limiting accuracy and efficiency. Implicit techniques, on the other hand, are free of such stability limitations and are thus favourable for computing stiff systems of differential equations [27].
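To make the stability restriction concrete, here is a small illustration of our own (not from the paper): explicit Euler applied to y′ = −λy with λ = 1000 diverges for any step length above 2/λ, while implicit Euler decays for every step length.

```python
# Illustration: explicit Euler blows up on y' = -1000*y once h > 2/1000,
# while implicit Euler stays stable at the same step length.
lam = 1000.0
h = 0.01          # well above the explicit stability bound 2/lam = 0.002
y_exp = y_imp = 1.0
for _ in range(50):
    y_exp = y_exp + h * (-lam * y_exp)   # explicit Euler: y *= (1 - lam*h) = -9
    y_imp = y_imp / (1.0 + lam * h)      # implicit Euler: y /= (1 + lam*h) = 11

print(abs(y_exp) > 1e10)   # True: explicit solution has exploded
print(0.0 < y_imp < 1.0)   # True: implicit solution decays monotonically
```

The exact solution e^{-1000t} is essentially zero after t = 0.01, yet the explicit method is forced to resolve it with over 500 steps per unit time purely for stability.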
The conception of stiff initial value problems can be best appreciated by studying the following general linear system with constant coefficients:

y′(t) = A y(t) + φ(t),    (1)

where A is an m × m matrix with real entries and y(t), y′(t) and φ(t) are m-dimensional vectors.
Definition 1: A solution vector (or solution) of the system (1) on an interval is an m × 1 matrix (or vector) of the form y(t) = (y_1(t), y_2(t), …, y_m(t))^T, where the y_i(t) are differentiable functions that satisfy (1) on that interval. See [1] for details.
where ln(TOL) denotes the natural logarithm of TOL.
The stiffness ratio defined by (3) is a measure of the spread of the time constants of (1); in practical problems it may be of the order of 10^8. See [8] for more info.
Nevertheless, it should be observed that this is quite a loose definition from a mathematical point of view. Stiffness occurs whenever the step length is limited by stability, rather than order, conditions; equivalently, Re(λ_i) < 0 for at least one eigenvalue λ_i of A, 1 ≤ i ≤ m. See [8] for more info.
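As a worked illustration of the stiffness ratio (the matrix below is our own, not one of the paper's test problems), the eigenvalues of a 2 × 2 symmetric constant-coefficient matrix follow from its trace and determinant, and the ratio of their magnitudes quantifies the stiffness:

```python
import math

# Hypothetical symmetric system y' = A y; eigenvalues from the characteristic
# polynomial lambda^2 - tr*lambda + det = 0.
A = [[-1000.5,   999.5],
     [  999.5, -1000.5]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # -1.0 and -2000.0
ratio = max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))
print(lam1, lam2, ratio)   # stiffness ratio = 2000.0
```

A ratio of 2000 means the fast mode e^{-2000t} forces an explicit method to use a step length 2000 times smaller than the slow mode e^{-t} alone would require.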
Definition 6: Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length. See [18][19] for details.
Definition 7: Stiffness occurs when some components of the solution decay much more rapidly than others. See [18][19] for details.
Authors have contributed immensely towards solving stiff systems of ordinary differential equations using diverse strategies. [9] implemented extensions of the predictor-corrector method for the solution of systems of ordinary differential equations. [11] studied the effect of variable mesh size on the stability of multistep methods. [12] established the stability and convergence of variable order multistep methods. [14] developed a diagonally implicit block backward differentiation formula with optimal stability properties for stiff ordinary differential equations. [15] introduced a variable-step, variable-order multistep method for the numerical solution of ordinary differential equations. [16] designed algorithms for changing the step size. [17] worked on changing step size in the integration of differential equations using modified divided differences. [21][22][23][24][25] developed and implemented variable-step methods of fixed order for solving ordinary differential equations. [26] established the stability, consistency and convergence of variable K-step methods for the numerical integration of large systems of ordinary differential equations. This research extends the idea of [21][22][23][24][25] by combining variable order with variable step size for solving stiff systems of ordinary differential equations. [18][19] indicate that this approach yields better efficiency and accuracy and also bypasses Theorem 4.
The variable step, variable order predictor-corrector algorithm emerged from the broad computational experience accumulated over the years. Such algorithms achieve high efficiency and accuracy through the potential to change automatically not only the step length but also the order (and thus the step number) of the techniques utilized. Algorithms with this capability are known as variable step, variable order, or VSVO, algorithms. Although most were originally built to handle nonstiff initial value problems, various existing VSVO codes include options for stiff systems. The necessary elements of VSVO algorithms are:
• a family of methods,
• a starting procedure,
• a local error estimator,
• a strategy for determining when to vary the step length and/or order,
• a technique for varying the step length and/or order,
• a software implementation in a mathematical package, required when manual computation is very tedious, and
• a special basis function approximation for estimating stiff systems, necessary when the desired accuracy and efficiency is not otherwise achieved.
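The fourth and fifth elements above (deciding when and how to vary the step length) are commonly realized through the standard error-controlled update h_new = h · (TOL/err)^{1/(p+1)}. The following sketch uses our own function names and safety-factor values, not the paper's:

```python
# Sketch of standard step-size control for a method of order p: accept the
# step when the local error estimate is within tolerance, and propose the
# next step length with a safety factor and bounded growth/shrink.
def new_step(h, err, tol, p, safety=0.9, grow_max=2.0, shrink_min=0.1):
    """Return (accepted, h_next) for a step with error estimate err."""
    if err == 0.0:
        return True, h * grow_max
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    factor = min(grow_max, max(shrink_min, factor))   # clamp the change
    return err <= tol, h * factor

accepted, h_next = new_step(h=0.1, err=1e-5, tol=1e-6, p=2)
print(accepted, round(h_next, 4))   # False 0.0418 (step rejected, retried smaller)
```

A VSVO code additionally compares the error estimates of neighbouring orders and switches to the order that permits the largest stable step.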
In addition, the convergence properties of predictor-corrector methods previously proved under the assumption of constant step length and constant order still hold in VSVO formulations. The results of [11][12] indicate that a VSVO algorithm based on an Adams-Bashforth-Moulton (ABM) pair, with step variation achieved by a variable coefficient technique, is always convergent (as the maximal step length used in the interval of integration tends to zero). Whenever an interpolatory technique is utilized, convergence is ensured provided the step/order-varying strategy is such that there exists a constant K such that in any K consecutive steps there are always k steps of constant length taken by the same k-th order ABM method, for some value of k.
These results emphasize once again that variable coefficient techniques, although in general more costly and cumbersome to implement, are essentially more effective than interpolatory techniques. See [18][19] for more details.
The succeeding sections will demonstrate the usefulness of these strategies. [4] was evidently the first author to propose a "smooth" change of a step size h to a new step size. [9,15] developed his ideas further: we study an arbitrary grid (t_n) and denote the step sizes by h_n = t_{n+1} − t_n. We presume that approximations y_i to y(t_i) are known for i = n − k + 1, …, n, set f_i = f(t_i, y_i), and denote by p(t) the polynomial which interpolates the values (t_i, f_i) for i = n − k + 1, …, n. Utilizing Newton's interpolation formula we get

p(t) = Σ_{j=0}^{k−1} ( Π_{i=0}^{j−1} (t − t_{n−i}) ) f[t_n, t_{n−1}, …, t_{n−j}],    (6)

II. MATERIALS AND METHODS
where the divided differences are defined recursively by

f[t_i] = f_i,
f[t_i, t_{i−1}, …, t_{i−j}] = ( f[t_i, …, t_{i−j+1}] − f[t_{i−1}, …, t_{i−j}] ) / ( t_i − t_{i−j} ).

It is natural to rewrite (6) in terms of coefficients depending only on the step-size ratios. We then define the approximation to y(t_{n+1}) by

y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} p(t) dt.    (10)

Substituting (6) into (10) yields (11), the extension of the explicit Adams method (1) to variable step sizes [13].
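The two ingredients above can be sketched directly, assuming only the standard divided-difference recursion and the Newton form of the interpolant (function names are ours):

```python
# Divided differences on an arbitrary (non-uniform) grid, plus evaluation of
# the Newton interpolating polynomial p(t) in nested (Horner-like) form.
def divided_differences(ts, fs):
    """Return [f[t0], f[t0,t1], ..., f[t0,...,tk]] via the usual recursion."""
    coeffs = list(fs)
    n = len(ts)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (ts[i] - ts[i - j])
    return coeffs

def newton_eval(ts, coeffs, t):
    """Evaluate p(t) = c0 + c1*(t-t0) + c2*(t-t0)*(t-t1) + ... by nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (t - ts[i]) + coeffs[i]
    return result

# f(t) = t^2 on a non-uniform grid: the quadratic is reproduced exactly.
ts = [0.0, 0.5, 2.0]
fs = [t * t for t in ts]
c = divided_differences(ts, fs)
print(newton_eval(ts, c, 1.3))   # ~ 1.69 = 1.3^2
```

Integrating such an interpolant over [t_n, t_{n+1}], as in (10), is exactly what produces the variable-step Adams coefficients.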
The variable step size implicit Adams methods can be derived likewise. We let p*(t) be the polynomial of degree k that interpolates (t_i, f_i) for i = n − k + 1, …, n + 1 (the value f_{n+1} = f(t_{n+1}, y_{n+1}) involves the unknown solution value y_{n+1}). Once more employing Newton's interpolation formula, the numerical solution y_{n+1} is defined implicitly by y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} p*(t) dt, which can be evaluated starting from the value p_{n+1} obtained by the explicit Adams method, together with recurrence relations for the coefficients g_j(n), Φ_j(n) and Φ*_j(n).
The cost of computing the integration coefficients is the main drawback of allowing arbitrary variations in the step size [16].
Whenever we assume a continuous (p + 2)-th derivative for y, we can substitute the Taylor series for y and y′ with O(h^{p+1}) remainders. When the terms in h^0, h^1, …, h^{p+1} are collected together, we arrive at the order conditions: the linear equations C_q = 0, q ≤ p, are the equations which determine a p-th order method.
Therefore the natural normalization is adopted. Theorem 5: If the multistep method (29) is stable and of order 1, then it is convergent. If the method (29) is stable and of order p, then it is convergent of order p [13].
where the linear difference operator L is specified by

L(y, t, h) = Σ_{j=0}^{k} [ α_j y(t + jh) − h β_j y′(t + jh) ].

Proof: Expand y(t + jh) and y′(t + jh) using Taylor series and insert the truncated series.
Substituting the Taylor series expansion into (20) gives the stated order conditions.
We proceed to show that the three conditions of Proposition 1 are equivalent. The identity, in which exp denotes the exponential function, together with the relation which follows from (21), proves the equivalence of conditions (i) and (ii).

B. Convergence
Convergence for variable step size Adams methods was first considered by [26]. In order to show convergence for the general case we introduce the vector Y_n = (y_{n+k−1}, …, y_{n+1}, y_n)^T. The method

y_{n+k} + Σ_{j=0}^{k−1} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j}    (22)

then becomes equivalent to a one-step recursion for Y_n, with the exact values y(t_i) approximated by y_i. The convergence theorem can now be formulated as follows. See [13] for more info.
Theorem 7: Assume that
• the method (27) is stable, of order p, and has bounded coefficients α_j(n) and β_j(n);
• the starting values satisfy ‖y(t_i) − y_i‖ = O(h_0^p);
• the step size ratios h_{n+1}/h_n are bounded.
Then the method is convergent of order p, i.e., ‖y(t_n) − y_n‖ ≤ C h^p for each differential equation, where h = max_j h_j. See [13] for more info.

Proof
Because the method has order p and the coefficients and step-size ratios are bounded, we conclude that the local truncation error satisfies δ_{n+1} = O(h^{p+1}).
Subtracting (23) and arguing as in the proof of Theorem 1, we deduce that Φ satisfies a uniform Lipschitz condition with respect to y. This, together with stability and (26), yields a bound on the global error. To resolve the resulting difference inequality, we introduce the sequence {u_n} satisfying the same recursion; a simple induction argument shows that the global error is dominated by u_n. From (27) we obtain, for n ≥ 1, a bound which shows the error remains O(h^p). This inequality, together with (28), completes the proof of Theorem 7. See [13] for more info.
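The order statement of Theorem 7 can be checked empirically. The sketch below is our own: it applies the one-step implicit Adams method (the trapezoidal rule, order 2) to y′ = −y and halves the step, observing the error shrink by a factor of about 2² = 4.

```python
import math

# Empirical convergence order of the trapezoidal rule (implicit Adams, p = 2)
# on y' = -y, y(0) = 1, measured at t = 1.
def trapezoidal_error(h, t_end=1.0):
    y = 1.0
    n = round(t_end / h)
    for _ in range(n):
        # implicit step (1 + h/2) y_new = (1 - h/2) y, solved exactly here
        y = y * (1 - h / 2) / (1 + h / 2)
    return abs(y - math.exp(-t_end))

e1, e2 = trapezoidal_error(0.01), trapezoidal_error(0.005)
print(round(e1 / e2, 2))   # ~ 4.0, confirming second-order convergence
```

The same halving experiment applied to a variable-step code gives the observed order p that the theorem predicts.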

C. Implementing the Convergence Criteria of Variable Step, Variable Order
The use of Milne's estimate for the principal local truncation error requires that the predictor and corrector possess the same order. This is attained by taking the predictor to be a k-step Adams-Bashforth method and the corrector to be a (k − 1)-step Adams-Moulton method; both then have order p = k. The p-th order ABM pair is therefore given by (29). If we imagine (29) being employed in P(EC)^μ E^{1−t} mode, then, in the second formula of (29), f_{n+1} will be replaced by f_{n+1}^{[μ]}, and the single value y_{n+1} on the right-hand side by its predicted counterpart. We may rewrite (30) in the form (31), where the notation is as specified in (29).
We now employ the pair (29) in P(EC)^μ E^{1−t} mode, and utilize the structure of the Adams methods to formulate a form of the ABM method which is computationally convenient and economical. Since the leading error constants of the pair are known, the Milne estimate for the principal local truncation error at t_{n+1} (which we shall denote by W_{n+1}) is obtained from the difference between the corrected and predicted values. See [2, 6-7, 18-19, 21-25] for more info.
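In PECE mode (μ = 1) the Milne estimate reduces to a scaled difference between predicted and corrected values. The sketch below is our own illustration with the order-2 Adams-Bashforth predictor and the trapezoidal Adams-Moulton corrector, for which the error constants 5/12 and −1/12 give W_{n+1} ≈ |y_C − y_P| / 6; it is not the paper's k-th order pair.

```python
import math

def f(t, y):
    return -y          # test equation y' = -y

def pece_step(t, y_prev, y, h):
    """One PECE step of the AB2/trapezoidal pair with Milne's error estimate."""
    y_pred = y + h * (1.5 * f(t, y) - 0.5 * f(t - h, y_prev))   # P, then E
    y_corr = y + h / 2 * (f(t, y) + f(t + h, y_pred))           # C, then E
    milne = abs(y_corr - y_pred) / 6.0   # |C_c / (C_p - C_c)| = (1/12)/(1/2)
    return y_corr, milne

h, t = 0.1, 0.1
y0, y1 = 1.0, math.exp(-0.1)            # exact starting values
y2, est = pece_step(t, y0, y1, h)
true_err = abs(y2 - math.exp(-0.2))
print(est, true_err)   # estimate and actual local error have the same magnitude
```

In a VSVO code this essentially free estimate W_{n+1} is the quantity compared against the convergence criterion to accept or reject the step.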

III. Practical Examples of Stiff Systems of First Order ODEs
We are interested in the computational interpretation of these properties.
A problem is stiff whenever the analytical solution being sought varies slowly, but there are nearby solutions that vary rapidly, so the numerical method must take small step sizes to obtain acceptable results. See [20] for more info.
Application Problem 1, An Engineering Example: In chemical engineering, a complicated production activity may involve several reactors connected by inflow and outflow pipes. If there are n reactors, the whole process is governed by a system of n differential equations. See [28] for more info; see also [18].
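Since the paper's reactor equations are not reproduced here, the following stands in with a hypothetical two-reactor linear system y′ = Ay + b (all coefficients are illustrative), integrated by backward Euler; the 2 × 2 linear solve (I − hA) y_new = y + hb is done by Cramer's rule.

```python
# Hypothetical two-reactor concentration model y' = A y + b, stiff because
# the eigenvalues of A are roughly -90 and -110. Backward Euler remains
# stable at h = 0.05 and converges to the steady state A y = -b.
A = [[-100.0,    1.0],
     [ 100.0, -101.0]]
b = [1.0, 0.0]
h, y = 0.05, [0.0, 0.0]
for _ in range(200):                     # integrate to t = 10
    m = [[1 - h * A[0][0],    -h * A[0][1]],
         [   -h * A[1][0], 1 - h * A[1][1]]]
    r = [y[0] + h * b[0], y[1] + h * b[1]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    y = [(r[0] * m[1][1] - m[0][1] * r[1]) / det,   # Cramer's rule
         (m[0][0] * r[1] - r[0] * m[1][0]) / det]
print(y)   # approaches the steady state y = (0.0101, 0.01)
```

An explicit method at this step length would diverge, since h·|λ_max| ≈ 5.5 lies far outside the explicit stability interval.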

V. CONCLUSION
Application problems 1, 2 and 3 represent stiff systems of ordinary differential equations, which tend to generate unstable system behavior and as such require a technical approach like CSVSVO to guarantee improved efficiency and better accuracy. The stiff systems of ordinary differential equations are solved employing the CSVSVO implementation. The CSVSVO has the capacity to apply convergence criteria in order to ensure the desired result is achieved; these convergence criteria decide whether the result is accepted or rejected. The CSVSVO implementation is carried out using a multiprocessor approach executed on the Mathematica software platform. Tables 1, 2 and 3 display the computational results, establishing that the CSVSVO attains the following convergence criteria: 10^{-3}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}, 10^{-10}, 10^{-11}. In addition, from the trend of the maximum errors achieved under the different convergence criteria, we conclude that the CSVSVO is capable of solving stiff systems of ordinary differential equations with better efficiency and accuracy, as exhibited in Tables 1, 2 and 3. See [5,18,[21][22][23][24][25],28] for details.