(tan(x+h) - tan(x-h)) / (2h)

to

(sin(x+h)·cos(x-h) - cos(x+h)·sin(x-h)) / (2h·cos(x+h)·cos(x-h))
This is a clever trick, because it converts the tangent function, which has asymptotes, into a function of
sines and cosines, which don't have asymptotes. Any numerical differentiator is going to have trouble
wherever the function is changing rapidly, such as near asymptotes.
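The effect of the rewrite can be checked numerically. A minimal Python sketch (the function names are mine, not the calculator's); note that the rewritten numerator, sin(x+h)cos(x-h) - cos(x+h)sin(x-h), collapses to sin(2h) by the sine subtraction identity:

```python
import math

def cdiff_tan_naive(x, h):
    # Central difference applied directly to tan()
    return (math.tan(x + h) - math.tan(x - h)) / (2 * h)

def cdiff_tan_rewritten(x, h):
    # Same quotient after the sine/cosine rewrite; the numerator
    # sin(x+h)cos(x-h) - cos(x+h)sin(x-h) is exactly sin(2h)
    return math.sin(2 * h) / (2 * h * math.cos(x + h) * math.cos(x - h))

x, h = 1.0, 1e-4
exact = 1 / math.cos(x) ** 2   # d/dx tan(x) = sec(x)^2
print(cdiff_tan_naive(x, h))
print(cdiff_tan_rewritten(x, h))
print(exact)
```

Both forms are algebraically identical, but the rewritten one never subtracts two tangent values that blow up near an asymptote.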
nDeriv() also transforms some other built-in functions:

sinh(), cosh(), tanh() use the exponential definitions of the hyperbolic functions
log() converts the expression to natural log
e^x converts the expression to use the e^x form of sinh()
10^x converts the expression to use sinh()
tanh^-1() converts the expression to use ln()
but nDeriv() directly evaluates these functions:

ln(), sin^-1(), cos^-1(), tan^-1(), sinh^-1(), cosh^-1()
nDeriv() also simplifies polynomials before calculating the central difference formula. For example,
nDeriv() converts
3x^2 + 2x + 1

to

6·(x + 1/3) = 6x + 2
Notice that in this case h drops out completely, because the central difference formula calculates
derivatives of 2nd-order equations exactly.
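This exactness is easy to verify. A short Python sketch (the function names are illustrative), showing that the central difference of a quadratic gives the same answer for every h:

```python
def f(x):
    # The example polynomial from the text: 3x^2 + 2x + 1
    return 3 * x**2 + 2 * x + 1

def central_diff(f, x, h):
    # Central difference formula; for a 2nd-order polynomial the h^2
    # truncation term vanishes, so h drops out of the result entirely
    return (f(x + h) - f(x - h)) / (2 * h)

# f'(x) = 6x + 2, so f'(2) = 14, regardless of h (up to roundoff)
for h in (1.0, 0.1, 1e-6):
    print(central_diff(f, 2.0, h))
```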
While this approach is laudable in terms of giving an accurate result, it is misleading if you expect nDeriv() to return the actual result of the central difference formula. Further, you can't take advantage of this method with your own functions if the 89/92+ cannot differentiate them. This example establishes the need for an improved numerical differentiation routine.
In Ridders' method the general principle is to extrapolate to h = 0. nder1() implements this idea by using successively smaller values of the starting interval h. At each value of h, a new central difference estimate is calculated. This new estimate, along with the previous estimates, is used to find higher-order estimates. In general, the error improves as the starting value of h is increased, then suddenly gets very large. The table below shows this effect using nder1() for f(x) = tan(x), where x = 1.
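The idea can be sketched in Python. This is not the calculator's nder1() code; it is a minimal re-implementation of the same scheme (central differences at shrinking h, combined by Neville-style extrapolation toward h = 0), with h0, levels, and the shrink factor as illustrative choices:

```python
import math

def ridders_deriv(f, x, h0=0.1, levels=8, shrink=1.4):
    # Tableau of estimates: column 0 holds raw central differences,
    # later columns hold successively higher-order extrapolations.
    tab = [[0.0] * levels for _ in range(levels)]
    h = h0
    tab[0][0] = (f(x + h) - f(x - h)) / (2 * h)
    best, err = tab[0][0], float("inf")
    for i in range(1, levels):
        h /= shrink
        tab[i][0] = (f(x + h) - f(x - h)) / (2 * h)
        fac = shrink * shrink
        for j in range(1, i + 1):
            # Combine estimates to cancel successive h^2 error terms
            tab[i][j] = (tab[i][j - 1] * fac - tab[i - 1][j - 1]) / (fac - 1)
            fac *= shrink * shrink
            e = max(abs(tab[i][j] - tab[i][j - 1]),
                    abs(tab[i][j] - tab[i - 1][j - 1]))
            if e < err:
                err, best = e, tab[i][j]
        # Stop when higher orders no longer improve the estimate
        if abs(tab[i][i] - tab[i - 1][i - 1]) >= 2 * err:
            break
    return best

# f(x) = tan(x) at x = 1; exact derivative is sec(1)^2
print(ridders_deriv(math.tan, 1.0))
print(1 / math.cos(1.0) ** 2)
```

This follows the tableau structure commonly used for Ridders' method; each diagonal step trades one power of h^2 of truncation error for a little more roundoff sensitivity, which is why the error eventually turns around and grows.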