PCHIP implementation
Description of the issue
PCHIP is an algorithm for computing the derivatives used to construct a cubic Hermite interpolant. In short, its aim is to construct an interpolating polynomial that preserves monotonicity in the input data. A longer description is given in [1].
At interior points of the range (i.e. excluding the endpoints), PCHIP computes the derivatives d_i \approx f'(x_i) of the function with the following formula:

d_i = \begin{cases} \frac{\Delta_{i-1} \Delta_i}{(1-\alpha)\Delta_{i-1} + \alpha \Delta_i}, & \text{if } \Delta_{i-1}\Delta_i > 0\\ 0, & \text{otherwise} \end{cases}

where \Delta_i = \frac{y_{i+1} - y_i}{h_i}, with h_i = x_{i+1} - x_i, is the slope of interval i, and \alpha is a weight given by

\alpha = \frac{h_{i-1} + 2h_i}{3(h_{i-1} + h_{i})}.
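As a sketch, the interior-point formula above can be written as a standalone function for a single point i (hypothetical helper name, real-valued data; this is not the interp.cc code):

```cpp
#include <cassert>

// Interior-point PCHIP derivative d_i: given the interval widths h_{i-1}, h_i
// and the interval slopes Delta_{i-1}, Delta_i, return the derivative.
double pchip_interior_derivative(double h_prev, double h_cur,
                                 double delta_prev, double delta_cur) {
    if (delta_prev * delta_cur <= 0.0) {
        return 0.0; // sign change or flat interval: force the derivative to 0
    }
    const double alpha = (h_prev + 2.0 * h_cur) / (3.0 * (h_prev + h_cur));
    // weighted harmonic mean of the two slopes
    return delta_prev * delta_cur /
           ((1.0 - alpha) * delta_prev + alpha * delta_cur);
}
```

With equal spacing and equal slopes this reduces to the common slope, and any change of sign between the two slopes returns 0.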
The implementation in interp.cc currently looks like this (somewhat abridged, and showing only the interior points):
```cpp
double h[length-1];
for (unsigned long i=0; i<length-1; i++){
    h[i] = x[i+1] - x[i];
}
for (unsigned long i=0; i<length-1; i++) {
    delta[i] = (y[i+1] - y[i])/(h[i]);
}
for (unsigned long i=0; i<length; i++) {
    d[i] = 0.;
}
std::vector<unsigned long> k;
for (unsigned long i=0; i<length-2; i++) {
    if (abs(delta[i])!=0. || abs(delta[i+1])!=0.) {
        // NOTE 1
        k.push_back(i);
    }
}
double h_sum[k.size()];
double w1[k.size()];
double w2[k.size()];
for (unsigned long i=0; i<k.size(); i++){
    h_sum[i] = h[k[i]] + h[k[i]+1];
    w1[i] = ( h[ k[i] ] + h_sum[i]) / (3.*h_sum[i]);
    w2[i] = ( h[ k[i]+1 ] + h_sum[i]) / (3.*h_sum[i]);
}
double dmax[k.size()];
double dmin[k.size()];
for (unsigned long i=0; i<k.size(); i++){
    // NOTE 2
    dmax[i] = max( abs(delta[k[i]]), abs(delta[k[i]+1]) );
    dmin[i] = min( abs(delta[k[i]]), abs(delta[k[i]+1]) );
    d[k[i]+1] = dmin[i] / std::conj( (w1[i]*delta[k[i]] + w2[i]*delta[k[i]+1])/dmax[i] );
}
```
See especially the code around the comments // NOTE 1 and // NOTE 2.
This effectively amounts to the following formula:
d_i = \begin{cases} \frac{|\Delta_{i-1}| |\Delta_i|}{((1-\alpha)\Delta_{i-1} + \alpha \Delta_i)^*} ,& \text{if } |\Delta_{i-1}|\neq 0 \text{ or } |\Delta_i| \neq 0\\ 0, & \text{otherwise} \end{cases}
Why it matters
So the derivative is set explicitly to zero only when the change over both adjacent intervals is exactly zero; whenever at least one of the two slopes is nonzero, the harmonic-mean formula is applied. This does not follow the philosophy of PCHIP, in which a change of direction (\Delta_{i-1}\Delta_i < 0) should also be cancelled by setting the derivative to zero.
Additionally, there is an unexplained complex conjugate in the denominator, which is not motivated by any comment in the source code, nor by the references.
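To make the behaviour concrete, here is a hypothetical isolation of the quoted interior-point update into a standalone function (the names are mine, not from interp.cc):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <complex>

// The quoted interp.cc update for one interior point: given h_{i-1}, h_i and
// the two interval slopes, return the derivative the formula produces. The
// conjugate is kept as in the source; it is a no-op for real slopes.
std::complex<double> interp_cc_derivative(double h_prev, double h_cur,
                                          std::complex<double> delta_prev,
                                          std::complex<double> delta_cur) {
    const double h_sum = h_prev + h_cur;
    const double w1 = (h_prev + h_sum) / (3.0 * h_sum); // multiplies Delta_{i-1}
    const double w2 = (h_cur + h_sum) / (3.0 * h_sum);  // multiplies Delta_i
    const double dmax = std::max(std::abs(delta_prev), std::abs(delta_cur));
    const double dmin = std::min(std::abs(delta_prev), std::abs(delta_cur));
    return dmin / std::conj((w1 * delta_prev + w2 * delta_cur) / dmax);
}
```

For the sign-change input \Delta_{i-1} = 1, \Delta_i = -1 with h_{i-1} = 1, h_i = 2, this returns -9, where a monotone PCHIP would return 0.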
Clearly, using the PCHIP algorithm for a function f: \mathbb{R} \rightarrow \mathbb{C} is not straightforward: monotonicity does not have a well-defined meaning for such a function, and neither does the harmonic mean of complex numbers. I am not convinced that the approach used in interp.cc fulfills all the expected behaviours of a PCHIP interpolation.
How to fix it
I propose changing the implementation to treat the real and imaginary components separately, which would at least guarantee monotonicity of each component on its own. It should be noted that the absolute value would still carry no such guarantee. This is ultimately a matter of preference, though: in the case of impedance there is nothing sacred about forcing the data to be piecewise monotone - it is for the most part a method to prevent "wiggles" that do not match our expectations or intuitions for the function.
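The proposal could be sketched as follows (hypothetical helper names; the endpoint treatment here is deliberately crude, copying the adjacent slope, since the endpoint formulas are not the subject of this issue):

```cpp
#include <cassert>
#include <complex>
#include <cstddef>
#include <vector>

// Real-valued PCHIP derivatives: interior points via the Fritsch-Butland
// weighted harmonic mean, including the sign-change check; endpoints handled
// crudely by copying the adjacent interval slope.
static std::vector<double> pchip_derivatives(const std::vector<double>& x,
                                             const std::vector<double>& y) {
    const std::size_t n = x.size();
    std::vector<double> h(n - 1), delta(n - 1), d(n, 0.0);
    for (std::size_t i = 0; i + 1 < n; ++i) {
        h[i] = x[i + 1] - x[i];
        delta[i] = (y[i + 1] - y[i]) / h[i];
    }
    for (std::size_t i = 1; i + 1 < n; ++i) {
        if (delta[i - 1] * delta[i] > 0.0) {
            const double alpha =
                (h[i - 1] + 2.0 * h[i]) / (3.0 * (h[i - 1] + h[i]));
            d[i] = delta[i - 1] * delta[i] /
                   ((1.0 - alpha) * delta[i - 1] + alpha * delta[i]);
        } // else: sign change or flat interval, keep d[i] = 0
    }
    d[0] = delta[0];
    d[n - 1] = delta[n - 2];
    return d;
}

// Proposed approach: treat Re(y) and Im(y) as two independent real data sets,
// compute PCHIP derivatives for each, and recombine them.
std::vector<std::complex<double>> pchip_derivatives_complex(
    const std::vector<double>& x,
    const std::vector<std::complex<double>>& y) {
    std::vector<double> re(y.size()), im(y.size());
    for (std::size_t i = 0; i < y.size(); ++i) {
        re[i] = y[i].real();
        im[i] = y[i].imag();
    }
    const std::vector<double> dre = pchip_derivatives(x, re);
    const std::vector<double> dim = pchip_derivatives(x, im);
    std::vector<std::complex<double>> d(y.size());
    for (std::size_t i = 0; i < y.size(); ++i) {
        d[i] = std::complex<double>(dre[i], dim[i]);
    }
    return d;
}
```

With this split, a sign change in either component zeroes that component's derivative independently, which is exactly the behaviour the current complex-valued formula lacks.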
Data and plots
Here is a comparison between the implementation in interp.cc (labeled "c++") and a Python implementation using scipy.interpolate on each component separately (labeled "scipy"), both applied to real impedance data. As can be seen, the interpolated function values themselves are identical (zero difference) at all points. This is to be expected, because they are plotted at the same points used to construct the interpolation. The two derivatives, however, differ substantially, with a relative difference of up to 1 and usually around 1e-3. Note that the plot shows the real component, but the results are very similar for the imaginary component, the magnitude, and the complex argument.
This impacts the computed Fourier integral as well: the relative error between Fourier integrals computed with the two methods is around 1e-3.
For comparison, as a control, the two Fourier-integral implementations give a much smaller difference when the exact same derivatives are fed to both (ignore the lines labeled "linear"):
It should be noted that this is not meant to show that the approach in interp.cc is wrong, merely to highlight that the choice of algorithm has a large impact on the resulting Fourier integral, and that it should therefore be chosen carefully.
[1] F. N. Fritsch and J. Butland, "A method for constructing local monotone piecewise cubic interpolants", SIAM J. Sci. Stat. Comput., 5(2), 1984, pp. 300-304.