The Solve command and the built-in SOLVE and NSOLVE functions can find real or complex solutions to individual algebraic equations and to systems of polynomial equations using exact and/or approximate methods.  To supplement these capabilities, the file EquationSolving.mth defines functions useful for finding approximate solutions (real or complex) to individual algebraic equations and to systems of algebraic equations using iterative methods.  The function definitions in the file are automatically loaded when any of its functions is first used.


Given a univariate expression u(x) and an initial guess x0, Newton's method uses repeated applications of the update formula xnew = x - u(x)/u'(x), evaluated at the current value of x, to find an x such that u(x) is arbitrarily close to zero.  Newton's method can be used to find an approximate solution of a univariate equation by applying it to the difference of the two sides of the equation.  Also, the method can be extended to solve a system of m equations in m variables.  Note that Newton's method requires that Derive be able to differentiate the expressions with respect to each of the solution variables.
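The update rule can be sketched in a few lines of Python (an illustration only, not Derive code; the function name newton and its arguments are arbitrary):

```python
def newton(u, du, x, n):
    """Return the n+1 iterates of the update x <- x - u(x)/u'(x)."""
    iterates = [x]
    for _ in range(n):
        x = x - u(x) / du(x)
        iterates.append(x)
    return iterates

# Approximate the square root of 3 as the positive root of x^2 - 3,
# starting from the guess 2 (cf. the NEWTON example below)
trace = newton(lambda x: x**2 - 3, lambda x: 2*x, 2.0, 5)
```

Note that the caller supplies the derivative du explicitly; in Derive, differentiation of u is done for you.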


NEWTON(u, x, x0, n) approximates to a vector of n+1 approximations for the variable x resulting from n applications of Newton's method to the univariate expression u(x) beginning with an initial guess of x0.  NEWTON(u, x, x0) approximates to a vector of the approximations for x until they converge at the current digits of precision.  For example, at 10 digits of precision, the expression

NEWTON(x^2-3, x, 2)

approximates to

[2, 1.75, 1.732142857, 1.732050810, 1.732050810, 1.732050810]

Note that the last element is a good approximation to the square root of 3.


If u is a vector of m expressions, x is a vector of m variables, and x0 is a vector of m initial guesses, NEWTONS(u, x, x0, n) approximates to a matrix of n+1 rows of approximations for the variables of x from n applications of Newton's method to the expressions in u beginning with the initial guesses in x0.  To solve a system of m equations in m variables, call NEWTONS with a vector of the differences of the two sides of each equation, a vector of the m variables, and a vector of the m initial guesses.  For example, to solve the equations exp(x·y) = x + 2·y and atan(x·y) = 2·x + y, approximate the expression

NEWTONS([EXP(x·y)-x-2·y, ATAN(x·y)-2·x-y], [x, y], [-1, -1], 7)

This results in an 8-row by 2-column matrix whose last row is an approximate solution to the equations based on 7 applications of Newton's method.  This solution can be verified by substitution of these values for x and y into the original equations.
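For a system of two equations, each step of the method solves a 2 by 2 linear system involving the Jacobian matrix of partial derivatives.  The following Python sketch (not Derive code; the names newtons2, f1, f2, and jac are arbitrary) applies this to the example system above, with the Jacobian entered by hand:

```python
import math

def newtons2(f1, f2, jac, x, y, n):
    """n Newton steps for a 2x2 system: solve J*d = residual, subtract d."""
    rows = [(x, y)]
    for _ in range(n):
        a, b, c, d = jac(x, y)          # Jacobian [[a, b], [c, d]]
        r1, r2 = f1(x, y), f2(x, y)     # residuals of the two equations
        det = a*d - b*c
        x -= (r1*d - b*r2) / det        # Cramer's rule for the 2x2 solve
        y -= (a*r2 - r1*c) / det
        rows.append((x, y))
    return rows

# Differences of the two sides of the example equations
f1 = lambda x, y: math.exp(x*y) - x - 2*y
f2 = lambda x, y: math.atan(x*y) - 2*x - y
jac = lambda x, y: (y*math.exp(x*y) - 1, x*math.exp(x*y) - 2,
                    y/(1 + (x*y)**2) - 2, x/(1 + (x*y)**2) - 1)
rows = newtons2(f1, f2, jac, -1.0, -1.0, 20)
xs, ys = rows[-1]                       # last row: the approximate solution
```

As the text suggests, substituting the last row back into f1 and f2 verifies that both residuals are near zero.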


NEWTONS(u, x, x0) approximates to a matrix of approximations for the variables of x until all converge at the current digits of precision.  Note that different rational numbers may look the same when displayed in scientific or decimal notation.  For example, the expression

NEWTONS([EXP(x·y)-x-2·y, ATAN(x·y)-2·x-y], [x, y], [-1, -1])

approximates to a matrix having 10 rows at 6 digits of precision and a matrix having 39 rows at 10 digits of precision.


If you are interested in solutions containing complex numbers, use such numbers in the components of the initial guess.  When a complex solution is found, its complex conjugate is often a good guess for another solution since complex solutions often occur in conjugate pairs.
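Nothing in the method itself changes for complex solutions; the same iteration simply runs in complex arithmetic.  A minimal Python sketch (not Derive code) finding a root of x^2 + 1, which has no real solutions:

```python
def newton_c(u, du, z, n):
    """Newton's method works unchanged with complex arithmetic."""
    for _ in range(n):
        z = z - u(z) / du(z)
    return z

# A complex initial guess in the upper half plane converges to z = i;
# its conjugate -i is then a natural guess for the other root
root = newton_c(lambda z: z*z + 1, lambda z: 2*z, 0.5 + 0.5j, 30)
```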


Even if there is a solution, Newton's method may not converge to it unless your initial guess is sufficiently close.  Also, divergence may generate large magnitude numbers that exhaust memory and/or take a long time to abort.  Thus, it is usually best to begin by limiting NEWTONS to a small number of iterations.  Then by examining the last few rows of the resulting matrix, you can determine whether Newton's method is converging.  If convergence does not begin within ten iterations or so, it is usually best to try a different initial guess rather than increasing the number of iterations.


If s is the last row of the resulting matrix, you can approximate the limit of u as x approaches s to determine if the residual is acceptably small.  If s has converged to within roundoff error at the current precision and you want more accuracy, increase the precision by a few digits and use s as your initial guess.  This is usually more efficient than doing the whole calculation at the higher precision.


If Newton's method is taking an unexpectedly long time, the iteration may be diverging.  Abort the computation, and retry using a smaller number of iterations or a better initial guess.


NEWTONS only finds one solution at a time.  To find additional solutions, you can use a different initial guess.  However, NEWTONS may still converge to the solution you already know.  To prevent this, the original system can be reduced to a deflated system of equations.  If the vector [s1, s2, ..., sm] is a solution to the system represented by the vector u, the vector for the deflated system is

              u
—————————————————————————————
(x1-s1)·(x2-s2)· ... ·(xm-sm)


If you find a solution to the deflated system, you can apply NEWTONS to the original system using this solution as an initial guess.  This eliminates roundoff error caused by deflation and gives a more accurate, purified solution.
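The idea can be illustrated in the univariate case in Python (a sketch, not Derive code; here the known root of x^2 - 3 is divided out, Newton's method on the deflated expression finds the other root, and the result is then purified against the original expression):

```python
def newton(u, du, x, n):
    for _ in range(n):
        x = x - u(x) / du(x)
    return x

u = lambda x: x**2 - 3
du = lambda x: 2*x

s = 1.7320508076                      # previously found root (approximate)
v = lambda x: u(x) / (x - s)          # deflated expression
# derivative of the deflated expression via a central difference
dv = lambda x, h=1e-6: (v(x + h) - v(x - h)) / (2*h)

other = newton(v, dv, 2.0, 20)        # converges to the remaining root
purified = newton(u, du, other, 10)   # re-run on the original expression
```

Starting again from the guess 2.0, the deflated expression steers the iteration to the second root near -1.732 instead of back to s, and the final Newton run removes the roundoff error introduced by deflation.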


The difficulty of finding sufficiently good initial guesses increases sharply with the number of variables.  Consequently, it is usually wise to eliminate as many variables as possible using the Solve > Expression command before resorting to NEWTONS.


Another way to solve a system of equations approximately is to convert the system to a vector fixed-point form x=g, with g=[g1, g2, ..., gm], where expressions g1 through gm may each depend on any or all of the variables in x=[x1, x2, ..., xm].  You can then substitute the initial guesses for x1 through xm into g to compute updated guesses for x.  You can continue substituting the most recent set of guesses into g to compute successive sets of guesses.


If there is a solution, if the magnitudes of the partial derivatives of g1 through gm are small in the neighborhood of the solution, and if the initial guess is close to the solution, then fixed-point iteration may converge to the solution, and the convergence may occur in an acceptable amount of time.  To compensate for these discouraging provisos, fixed-point iteration does not require differentiability and does not require space to store derivatives, unlike Newton's method.


FIXED_POINT(g, x, x0, n) approximates to an n+1 by m matrix, where g is the vector of expressions [g1, g2, ..., gm].  The other arguments, the returned result, and the usage suggestions are similar to those described for NEWTONS.  For example, use complex numbers in the components of your initial guess if you are interested in complex solutions.
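The iteration itself is just repeated substitution, as this Python sketch shows (not Derive code; the system here is an assumed example chosen so that the iteration contracts):

```python
import math

def fixed_point(g1, g2, x, y, n):
    """Return n+1 rows of fixed-point iterates for x = g1(x,y), y = g2(x,y)."""
    rows = [(x, y)]
    for _ in range(n):
        x, y = g1(x, y), g2(x, y)
        rows.append((x, y))
    return rows

# The partial derivatives of both right sides have magnitude at most 1/2,
# so the iteration converges from the guess (0, 0)
g1 = lambda x, y: math.cos(y) / 2
g2 = lambda x, y: math.sin(x) / 2
rows = fixed_point(g1, g2, 0.0, 0.0, 60)
xf, yf = rows[-1]
```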


However, FIXED_POINT may diverge no matter how close the initial guess is to an exact solution.  The most important determinants of success are choosing a close enough initial guess and choosing a particular fixed-point form for which g1 through gm have partial derivatives of small magnitude near the guess.  Ideally, each equation has a portion that is invertible with respect to a distinct variable, and the magnitude of the partial derivative of this portion with respect to that variable is dominant near your guess.  You can then solve each equation for its dominant variable xk in its dominant portion, in terms of the other portions, to obtain the form xk=gk.  The same variable may also occur on the right side, but weakly.
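A scalar Python illustration of why the choice of form matters (not Derive code): to solve x - cos(x) = 0, the form x = cos(x) has derivative magnitude |sin(x)|, roughly 0.67 near the root, so it contracts; solving instead for the x inside the cosine gives x = acos(x), whose derivative has magnitude roughly 1.5 there, and that form diverges.

```python
import math

# Convergent fixed-point form for x - cos(x) = 0: iterate x = cos(x).
# Near the root, |d/dx cos(x)| = |sin(x)| < 1, so the iteration contracts.
x = 0.5
for _ in range(200):
    x = math.cos(x)
```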


Even if an equation cannot be solved exactly and extra parameters make it impossible to solve numerically, it may still be possible to find a truncated Taylor series solution.  The series solution can extend an exact or approximate solution at a particular point into a useful neighborhood around the point.  Of course, it is necessary that a truncated series solution exists at the point.


TAYLOR_SOLVE(u, x, y, x0, y0, n) simplifies to an nth degree truncated Taylor series solution y(x) of the equation u(x, y)=0, given u(x0, y0)=0.  For example, since y0=0 at x0=0 is a solution of the equation sin(y) + y + x·exp(x) = 0,

TAYLOR_SOLVE(SIN(y) + y + x·EXP(x), x, y, 0, 0, 3)

simplifies to

       3      2        
   25·x      x      x  
- ——————— - ———— - ——— 
     96       2     2  


This is the 3rd degree truncated Taylor series solution of the equation expanded about the point y0=0 at x0=0.  Note that it is impossible to solve this transcendental equation exactly for y.  Also the extra parameter x makes it impossible to solve the equation numerically.
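The displayed series can be spot-checked numerically in Python (an illustration, not Derive code): substituting it for y should make the left side of the equation vanish through the x^3 term, leaving a residual of order x^4.

```python
import math

def residual(x):
    """Substitute the truncated series for y into sin(y) + y + x*exp(x)."""
    y = -25*x**3/96 - x**2/2 - x/2
    return math.sin(y) + y + x*math.exp(x)

# the residual shrinks like x^4 (roughly 0.23*x^4), confirming that all
# terms through x^3 cancel
r = residual(0.01)
```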


TAYLOR_INVERSE(u, x, y, x0, n) simplifies to an nth degree truncated Taylor series expansion of the inverse of the function defined by y=u(x), expanded about y0=u(x0).  This is useful when u(x) cannot be inverted exactly in closed form.  For example,

TAYLOR_INVERSE(x·EXP(x), x, y, 0, 3)

simplifies to

    3           
 3·y      2     
—————— - y  + y 
   2            
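
This result can also be spot-checked numerically in Python (an illustration, not Derive code): applying u(x) = x·exp(x) to the truncated inverse series should reproduce y up to an error of order y^4.  (The exact inverse of x·exp(x) is the Lambert W function, which has no elementary closed form.)

```python
import math

def residual(y):
    """Apply u(x) = x*exp(x) to the truncated series for its inverse."""
    x = 3*y**3/2 - y**2 + y
    return x*math.exp(x) - y

# the residual shrinks like y^4, confirming the composition is the
# identity through the y^3 term
r = residual(0.01)
```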


Other Utility File Library
