faster if the expansion sequence is unknown (i.e. we don't know if it's a power series or a log series, for instance); slower if the expansion sequence is known.
For example, to find roots of an equation we need to express it as:
$$x = g(x)$$
where $x$ is the solution we're looking for. Then, starting from a guess $x_0$ (which, if possible, should be chosen to be the solution for $\epsilon = 0$, so that the solution is right to order 1 at least), we iterate:
$$x_{n+1} = g(x_n)$$
and the iterations should get better if $|g'(x)| < 1$ (prime = derivative), and $g$ is suitably chosen. However, to get an asymptotic expansion we actually require $g'(x) \to 0$ as $\epsilon \to 0$. In particular, if $g'(x) = O(\epsilon)$, one gets one term of a power-series expansion per iteration, as can be seen from the argument in the notes: the difference between the true answer and the $n$-th answer gets multiplied by $g' = O(\epsilon)$ at every iteration, since $x - x_{n+1} = g(x) - g(x_n) \approx g'(x)(x - x_n)$. If we don't know the order of $g'$, the way to check whether the iteration is right up to some order is to try one more iteration and see if the term changes (though I don't think that's definitive proof).
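The "one order in $\epsilon$ per iteration" claim can be checked numerically. Here is a minimal sketch; the equation $x^2 + \epsilon x - 1 = 0$ and the rearrangement $x = g(x) = \sqrt{1 - \epsilon x}$ are my illustrative choices, not fixed by the notes. Since $g'(x) = -\epsilon / (2\sqrt{1 - \epsilon x}) = O(\epsilon)$, the error should shrink by a factor of roughly $\epsilon/2$ each step:

```python
import math

eps = 0.01
g = lambda x: math.sqrt(1 - eps * x)  # x = g(x), from x^2 + eps*x - 1 = 0

# Exact positive root, for measuring the error
exact = (-eps + math.sqrt(eps**2 + 4)) / 2

x = 1.0  # solution of the unperturbed (eps = 0) equation x^2 = 1
errors = []
for n in range(5):
    x = g(x)
    errors.append(abs(x - exact))

# Each error is roughly |g'| ~ eps/2 times the previous one,
# i.e. each iteration gains about one power of eps.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios)
```

The printed ratios hover near $\epsilon/2 = 0.005$, consistent with gaining one power-series term per iteration.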
The usual procedure is to place the dominant term of the equation on the LHS (i.e., the side that will give the new value), so that it can be calculated as a function of the terms on the RHS (i.e., the previously-obtained value). As we will see later, the identity of the dominant term can be adjusted by scaling. I think we place the dominant term on the LHS because that ensures we choose that term to be right to first order in the 0th iteration, and so the equation is right to first order. In the simple example of $x = \sqrt{1 - \epsilon x}$, which comes from $x^2 + \epsilon x - 1 = 0$, we selected the $x^2$ term; if we had selected the $\epsilon x$ term, we would have to divide by $\epsilon$, and the $\epsilon = 0$ case would not be well defined, indicating that we want to get the dominant term right in the equation. Another way to look at it is dominant balance: by putting the dominant term on the LHS, the iteration approximately expresses dominant balance!
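The failure mode is easy to see numerically. In this hypothetical sketch (with $\epsilon = 0.1$), isolating the small term $\epsilon x$ gives $x = (1 - x^2)/\epsilon$, whose derivative $|g'| = 2|x|/\epsilon$ is huge near the root, and the iteration blows up immediately:

```python
eps = 0.1
g_bad = lambda x: (1 - x * x) / eps  # small term isolated: we must divide by eps

x = 1.0  # start from the eps = 0 root of x^2 = 1
history = [x]
for n in range(4):
    x = g_bad(x)
    history.append(x)

print(history)  # iterates explode in magnitude instead of converging
```

Compare with the dominant-term rearrangement $x = \sqrt{1 - \epsilon x}$, which contracts toward the root.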
For the iterative method, different functions $g$ may be needed to find different perturbed roots of an algebraic equation, so that the condition $g'(x) \to 0$ as $\epsilon \to 0$ is satisfied.
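For instance (again using $x^2 + \epsilon x - 1 = 0$ as my illustrative equation, not one from the notes), the two perturbed roots near $x = \pm 1$ each need their own rearrangement, $g_\pm(x) = \pm\sqrt{1 - \epsilon x}$:

```python
import math

eps = 0.1

def solve(g, x0, n_iter=20):
    """Fixed-point iterate x_{k+1} = g(x_k), starting from the eps = 0 root x0."""
    x = x0
    for _ in range(n_iter):
        x = g(x)
    return x

g_plus = lambda x: math.sqrt(1 - eps * x)    # converges to the root near +1
g_minus = lambda x: -math.sqrt(1 - eps * x)  # converges to the root near -1

roots = (solve(g_plus, 1.0), solve(g_minus, -1.0))
exact = ((-eps + math.sqrt(eps**2 + 4)) / 2,
         (-eps - math.sqrt(eps**2 + 4)) / 2)
print(roots, exact)
```

Both branches have $|g_\pm'| \approx \epsilon/2 \ll 1$ near their respective roots, so each converges, but to a different root depending on the sign chosen.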
The proof that this method works is based on a fixed-point theorem, in particular on the contraction mapping theorem, which is also used to prove that Fractals are well defined.
See more at Fixed-point iteration
If $|g'| > 1$ near the root, the iteration doesn't converge:
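A quick illustration (my own example, not from the notes): $x = e^{-x}$ and $x = -\ln x$ share the same fixed point $x^* \approx 0.567$, but the first map has $|g'(x^*)| = e^{-x^*} \approx 0.57 < 1$ while the second has $|g'(x^*)| = 1/x^* \approx 1.76 > 1$, so the first converges and the second drifts away even from a start very close to $x^*$:

```python
import math

x_star = 0.5671432904097838  # fixed point of x = exp(-x) (the omega constant)

# Convergent map: |g'(x*)| = exp(-x*) ~ 0.57 < 1
x = 0.5
for _ in range(50):
    x = math.exp(-x)
print(abs(x - x_star))  # error is tiny after 50 steps

# Divergent map with the SAME fixed point: |g'(x*)| = 1/x* ~ 1.76 > 1
y = 0.56  # start very close to x*
errs = []
for _ in range(5):
    y = -math.log(y)
    errs.append(abs(y - x_star))
print(errs)  # the error grows at every step
```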