Maybe use a Savitzky-Golay filter, or a Kolmogorov–Zurbenko filter?
Kalman filters suck (with the understanding that I don't really know what I'm talking about). They are model-based and tied closely to an accurate model of a specific problem domain (for example, airplane flight dynamics, which may be unique to a specific aircraft). For a good, general-purpose filter that just looks at the numbers, doesn't care about the application domain, and needs no model of the sensor data, consider the Savitzky-Golay filter (in both FIR and IIR varieties). Also consider exponential smoothing and double exponential smoothing. No, don't use a plain moving average; that's almost never a good idea, and I'd rather look at the noisy data. The basic problem is to find a good low-pass filter that suppresses high-frequency noise in a given set of data.
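To make the comparison concrete, here is a quick sketch of the "just look at the numbers" approach: SciPy's `savgol_filter` plus hand-rolled single and double (Holt's) exponential smoothing. The test signal and parameter values are mine, picked for illustration, not taken from any of the sources mentioned here.

```python
import numpy as np
from scipy.signal import savgol_filter

def single_exponential(x, alpha):
    # Classic exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1]
    s = np.empty(len(x), dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def double_exponential(x, alpha, beta):
    # Holt's method: adds a trend term so the filter tracks ramps without
    # the lag that plain exponential smoothing (or a moving average) shows.
    level, trend = x[0], x[1] - x[0]
    out = [level]
    for t in range(1, len(x)):
        prev_level = level
        level = alpha * x[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        out.append(level)
    return np.array(out)

# Noisy ramp: no model of the process is supplied, just the numbers.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
noisy = 3.0 * t + rng.normal(0.0, 0.05, t.size)

# FIR Savitzky-Golay: local least-squares polynomial fit in a sliding window.
sg = savgol_filter(noisy, window_length=11, polyorder=2)
holt = double_exponential(noisy, alpha=0.3, beta=0.1)
```

The point of the demo: all three filters need only two or three tuning numbers (window length and polynomial order, or smoothing constants), with no state-transition or measurement model of the kind a Kalman filter wants.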
At least, that is the impression the Kalman filter left me with when I tried to use it to filter Geiger counter detection events and accelerometer sensor readings. Maybe I was doing it wrong and made it harder than it needed to be, so I came away with a mistaken notion. Often, engineers who are not really that familiar with a problem want a magic formula to make the problem go away: a "close enough" hand-grenade solution, good for all situations.
Oh, but now I find out about the Kolmogorov–Zurbenko filter, which gets high praise and yet is a form of iterated moving-average filter. There's even the KZFT -- a Fourier transform combined with a KZ filter.
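The KZ filter is simple enough to sketch from its definition: k repeated passes of a centered moving average of width m. My understanding is that iterating the average makes the effective kernel approach a Gaussian, which is why it behaves far better in the stop band than the single moving average I complained about above. A minimal NumPy version (parameter names and edge handling are my choices):

```python
import numpy as np

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter: k passes of a centered moving
    average of (odd) window length m. Note that mode="same" pads with
    zeros, so the first and last (m-1)*k/2 or so samples are attenuated."""
    kernel = np.ones(m) / m
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = np.convolve(y, kernel, mode="same")
    return y
```

Because it is just repeated averaging, the whole filter is two tuning knobs (m and k) and no model, which fits the theme here.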
See also Smoothing.
- "Generalized IIR Savitzky-Golay Filters: Derivation, Parameterization, Realization and an Adaptive Configuration" by Hugh Lachlan Kennedy. Quoting the author's advice on filtering mobile-phone accelerometer sensor output (sensor fusion): "Use recursive filters to fit a polynomial to each of your channels, using discounted least-squares regression. Monitor the vector of estimated model coefficients, produced by your filter bank (i.e. the Laguerre spectrum), for sudden changes. A more reliable method (to reduce false alarms) would be to look for squared changes that are large relative to the estimate of the sensor noise variance, which is recursively derived from the residual of the polynomial fit..." He answers another question about general-purpose sensor fusion and anomaly detection: "For anomaly detection in data streams with a uniform sampling period I like using low-order IIR filters designed using polynomial regression. Using an IIR filter, with an exponentially decaying weight to analyse the data allows long histories to be processed efficiently using recursion. The variance of the estimate is also estimated recursively. So any measurements that are beyond some multiple of the estimated std dev may be declared as outliers/anomalies.
I guess most people would use a Kalman filter for this sort of thing, Which is fine if you have good prior knowledge of the process and the measurement stats. Using polynomials is great if all you know is that your process is confined to the very low-frequency region of the spectrum, so that a simple smoother suffices."
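Here is my reading of that advice as runnable code, not Kennedy's own implementation. I use Brown's double exponential smoothing, which is equivalent to a discounted least-squares fit of a degree-1 polynomial with exponentially decaying weights, track the residual variance recursively, and flag any sample whose one-step-ahead residual exceeds k estimated standard deviations. All parameter values and the detector function name are mine:

```python
import numpy as np

def dls_anomaly_detector(x, alpha=0.2, lam=0.95, k=4.0):
    # Brown's double exponential smoothing == discounted least-squares
    # fit of a line; level and trend are recovered from the two
    # smoothed statistics s1 and s2.
    s1 = s2 = float(x[0])
    var = 1.0                       # recursive residual-variance estimate
    flags = np.zeros(len(x), dtype=bool)
    for t in range(1, len(x)):
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        pred = level + trend        # one-step-ahead forecast from the local line
        e = x[t] - pred             # residual (innovation)
        if t > 10 and e * e > (k * k) * var:
            flags[t] = True         # beyond k estimated std devs: anomaly
        var = lam * var + (1.0 - lam) * e * e   # recursive variance update
        s1 = alpha * x[t] + (1.0 - alpha) * s1
        s2 = alpha * s1 + (1.0 - alpha) * s2
    return flags

# Demo: slow ramp plus noise, with one injected spike.
rng = np.random.default_rng(1)
tt = np.arange(120)
sig = 0.05 * tt + rng.normal(0.0, 0.1, tt.size)
sig[50] += 5.0
flags = dls_anomaly_detector(sig)
```

As in the quote, everything updates by recursion (constant work per sample, effectively unbounded history), and the only "model" assumed is that the process is locally well described by a low-order polynomial.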
- "Multidimensional Digital Smoothing Filters for Target Detection" by Hugh L. Kennedy, published in Signal Processing, Volume 114, September 2015, pages 251-264.