We introduce algorithms that use predictions about their input, such as those produced by machine learning, to circumvent worst-case analysis. We aim for algorithms that achieve near-optimal performance when these predictions are good, yet maintain provable bounds (such as on worst-case performance) even when the predictions have large errors. We survey several recent results showing how predictions can be used effectively while still allowing theoretical guarantees, focusing on our own work on scheduling and Bloom filter data structures.
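To make the "good predictions help, bad predictions can't break the guarantee" idea concrete, here is a minimal sketch of a learned Bloom filter in that spirit: a learned oracle predicts set membership, and a small backup Bloom filter stores exactly the keys the oracle misses. The oracle, the toy key set, and the filter parameters below are illustrative placeholders, not the construction from any specific paper; the point is only the structural guarantee that no error by the oracle can cause a false negative.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: m bits, k hash functions derived from SHA-256."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _hashes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._hashes(item))

class LearnedBloomFilter:
    """A learned oracle answers membership queries first; keys the
    oracle fails to recognize are inserted into a backup Bloom filter.
    However inaccurate the oracle, there are no false negatives; the
    oracle only affects the false-positive rate and backup filter size."""
    def __init__(self, oracle, keys, m=1024, k=3):
        self.oracle = oracle
        self.backup = BloomFilter(m, k)
        for key in keys:
            if not oracle(key):       # oracle misses this stored key...
                self.backup.add(key)  # ...so the backup filter covers it

    def __contains__(self, item):
        return self.oracle(item) or item in self.backup
```

For example, with a (hypothetical) oracle `lambda s: s.startswith("a")` and stored keys `["apple", "banana", "cherry"]`, the keys not recognized by the oracle land in the backup filter, so every stored key still tests positive. As with an ordinary Bloom filter, false positives remain possible, and bounding their rate under prediction error is exactly the kind of guarantee the analysis targets.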
Michael Mitzenmacher is a Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. Michael has authored or co-authored over 200 conference and journal publications on a variety of topics, including algorithms for the Internet, efficient hash-based data structures, erasure and error-correcting codes, power laws, and compression. He is an ACM and IEEE Fellow. He has co-authored a widely used textbook on randomized algorithms and probabilistic techniques in computer science published by Cambridge University Press.