Although I was a computer science minor, I’d never heard of statistical machine learning until after college. Now I dabble in machine learning on the side. In the long run, I’m interested in studying the intersection of operations research and learning, i.e. intelligent optimization systems.

Two years ago, I stumbled across Peter Norvig’s essay How to Write a Spelling Corrector. Google is a notoriously good spelling corrector; just try googling “spellign.” I find that Google knows what word I’m trying to spell even when an application’s built-in spell check fails. Norvig explains how to use Bayes’ theorem to write a pretty-good spelling corrector in 21 lines of Python. In college, I had a good grounding in probability and computing, but I don’t recall having seen the two mixed so elegantly.

I’m pleased to now know that Norvig was only scratching the surface! This week, I’ve been watching Sebastian Thrun’s lectures on Kalman filters for AI. Probabilistic techniques for filtering noisy data can be used, for example, by a robot keeping track of objects moving around it.
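
The Bayesian idea behind Norvig’s corrector can be sketched in a few lines: pick the known candidate word c maximizing P(c), among candidates within one edit of the typo. This is a simplified sketch, not Norvig’s exact code; the tiny inline corpus here is purely illustrative (he trains counts on a large text file).

```python
import re
from collections import Counter

# Illustrative stand-in corpus so the sketch is self-contained;
# a real corrector would count words from a large corpus instead.
CORPUS = "the quick brown fox jumps over the lazy dog the dog sleeps"
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

def P(word, N=sum(WORDS.values())):
    # Prior P(c), estimated from corpus counts.
    return WORDS[word] / N

def edits1(word):
    # All strings one edit away: deletes, transposes, replaces, inserts.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def known(words):
    return {w for w in words if w in WORDS}

def correct(word):
    # Prefer the word itself if known, else known words one edit away,
    # else give up and return the word unchanged.
    candidates = known([word]) or known(edits1(word)) or [word]
    return max(candidates, key=P)
```

For example, `correct("teh")` finds "the" via a transposition and picks it because it is the most frequent candidate in the corpus.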
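
The flavor of Kalman filtering from those lectures can be shown in one dimension: maintain a Gaussian belief (mean and variance) over a position, and alternate predict and measurement-update steps. This is a minimal sketch under assumed toy parameters (a roughly stationary object, Gaussian measurement noise), not a full multivariate tracker.

```python
def kalman_step(mu, sigma2, z, r, q=0.0):
    # Predict: the state stays put; uncertainty grows by process noise q.
    mu_pred, sigma2_pred = mu, sigma2 + q
    # Update: blend the prediction with measurement z (variance r).
    k = sigma2_pred / (sigma2_pred + r)   # Kalman gain
    mu_new = mu_pred + k * (z - mu_pred)
    sigma2_new = (1 - k) * sigma2_pred
    return mu_new, sigma2_new

# Track an object near position 5.0 from noisy readings,
# starting from a deliberately vague belief.
mu, sigma2 = 0.0, 1000.0
for z in [5.3, 4.8, 5.1, 4.9, 5.2]:
    mu, sigma2 = kalman_step(mu, sigma2, z, r=1.0)
```

After a handful of measurements the mean settles near the true position and the variance shrinks, which is exactly the "filtering noisy data" behavior that makes this useful for tracking.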