Sequential Bayesian Filtering is how you apply repeated evidence to a moving target. There are three steps:
1. Predict: Using some Markov process, move your prior distribution forward in time so it's compatible with your new evidence. (Intuitively, everything becomes less certain as it's free to move around. Mathematically, doing this with continuous probabilities tends to mean an incredibly gross integral.)
2. Update: Using Bayes' Rule, update your probabilities with the new evidence. (Intuitively, this bunches the distribution back up. If the predict/update don't vary in time/quality, this tends to asymptotically reach some sort of balance. Mathematically, this tends to also be gross.)
3. Notreallyastep: Recycle your results as the priors in step 1 next time. (Note this means your result needs to be in the same format as your old priors if you don't want to re-solve all the math every update.)
If you get around the gross math by doing everything in finite space and brute forcing it (integrals become summations), you get a hidden Markov model.
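A minimal sketch of that brute-force version, with all three steps visible. The 5-cell tracking problem, the transition matrix, and the sensor likelihood are made up for illustration:

```python
import numpy as np

# Hypothetical 1-D tracking problem on a 5-cell grid. The transition
# matrix T is the Markov process for the "predict" step (the gross
# integral becomes a matrix-vector summation), and the elementwise
# likelihood product is the Bayes "update" step.
n = 5
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:      # target can stay put or move one cell
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)  # normalize rows to probabilities

prior = np.full(n, 1.0 / n)        # start totally uncertain

def step(prior, likelihood):
    predicted = prior @ T                 # predict: probability spreads out
    posterior = predicted * likelihood    # update: Bayes' rule...
    return posterior / posterior.sum()    # ...plus renormalization

# Sensor says "probably cell 2" twice in a row.
likelihood = np.array([0.05, 0.1, 0.7, 0.1, 0.05])
belief = step(prior, likelihood)
belief = step(belief, likelihood)  # last posterior recycled as the new prior
```

Note that the output of `step` is the same shape as its input prior, which is exactly the "Notreallyastep" requirement: the posterior feeds straight back in as next cycle's prior.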
If you get around the gross math by doing a Monte-Carlo approximation, you get a particle filter.
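A sketch of the Monte-Carlo version, assuming a made-up 1-D target with Gaussian motion and sensor noise (the numbers are illustrative, not canonical). Each particle is one sample from the prior; the predict step moves the samples, the update step reweights them, and resampling turns the weighted cloud back into a plain prior for the next cycle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior: 2000 samples from a vague Gaussian belief about position.
n_particles = 2000
particles = rng.normal(0.0, 5.0, n_particles)

def pf_step(particles, measurement, process_std=1.0, meas_std=1.0):
    # Predict: push each particle through the noisy motion model.
    particles = particles + rng.normal(0.0, process_std, particles.size)
    # Update: weight each particle by the measurement likelihood (Bayes).
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample so the result is an unweighted cloud, same format as the prior.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx]

for z in [2.0, 2.1, 1.9, 2.0]:        # four measurements near 2.0
    particles = pf_step(particles, z)
# particles.mean() is now near 2.0 and their spread has tightened
```

The summation hiding in `weights.sum()` and the resampling draw are doing the work of the gross integrals.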
If you assume your priors are normal, your evidence is normal, and your update function fits in a matrix multiplication, then you're in luck: all of the math works out so your result is also normal. That's a Kalman Filter.
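In the 1-D case the whole cycle collapses to a few scalar operations, which is a nice way to see why the Gaussian assumptions are such a win. A minimal sketch, with the noise variances and measurements invented for illustration:

```python
# Minimal 1-D Kalman filter: because priors, process noise, and evidence
# are all Gaussian, predict/update reduce to arithmetic on (mean, var).
def kalman_step(mean, var, z, process_var=1.0, meas_var=2.0):
    # Predict: the Markov step just inflates the variance (less certain).
    mean_p, var_p = mean, var + process_var
    # Update: Bayes' rule for two Gaussians is a precision-weighted average.
    k = var_p / (var_p + meas_var)     # Kalman gain
    mean_new = mean_p + k * (z - mean_p)
    var_new = (1.0 - k) * var_p
    return mean_new, var_new           # a Gaussian again: same format as the prior

mean, var = 0.0, 100.0                 # vague prior
for z in [5.2, 4.8, 5.1, 5.0]:
    mean, var = kalman_step(mean, var, z)
# mean settles near 5; var shrinks toward a fixed point where the
# predict-step inflation and the update-step shrinkage balance out.
```

That fixed-point behavior is the "asymptotic balance" mentioned in the update step: with constant noise parameters, the variance (and hence the gain) converges no matter what the measurements are.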