This paper systematically analyzes and enriches the observational learning paradigm of Banerjee (1992) and Bikhchandani, Hirshleifer, and Welch (1992). Our contributions fall into three categories.
First, we develop what we consider to be the 'right' analytic framework for informational herding (convergence of actions and convergence of beliefs, using a Markov-martingale process). We demonstrate its power and simplicity in four major ways: (1) We decouple herds and cascades: Cascades might never arise, even though herds must. (2) We show that wrong herds can arise iff the private signals have uniformly bounded strength. (3) We determine when the mean time to start a herd is finite, and show that (absent revealing signals) it is infinite when a correct herd is inevitable. (4) We prove that long-run learning is unaffected by background 'noise' from crazy/trembling decisions.
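The decoupling of herds and cascades can be illustrated in the simplest binary-signal special case of Bikhchandani, Hirshleifer, and Welch (1992). The sketch below is not the paper's general framework; it assumes a symmetric binary signal of precision `p` and the common tie-breaking convention that an indifferent agent follows her own signal, so that every pre-cascade action reveals its signal and a cascade starts once the net signal count reaches ±2. Because the binary signal has uniformly bounded strength, wrong herds occur with positive probability, consistent with result (2):

```python
import random

def simulate_run(p=0.6, n_agents=200, rng=None):
    """One run of a simple BHW-style cascade model (an illustrative
    sketch, not the paper's general Markov-martingale framework).
    The true state is theta=1; each agent's binary signal matches
    theta with probability p.  Ties are broken by following one's
    own signal, so pre-cascade actions reveal signals."""
    rng = rng or random
    d = 0  # net informative count: (#signals for 1) - (#signals for 0)
    for _ in range(n_agents):
        if d >= 2:
            return 1   # herd on the correct action
        if d <= -2:
            return 0   # herd on the wrong action
        signal = 1 if rng.random() < p else -1
        d += signal    # before a cascade, each action reveals the signal
    return None        # no herd formed within n_agents (vanishingly rare)

rng = random.Random(0)
results = [simulate_run(rng=rng) for _ in range(2000)]
herds = [r for r in results if r is not None]
wrong_rate = sum(1 for r in herds if r == 0) / len(herds)
print(f"herds in {len(herds)}/2000 runs; wrong-herd rate = {wrong_rate:.3f}")
```

Under these conventions the pre-cascade count is a gambler's-ruin random walk absorbed at ±2, so for p = 0.6 the wrong-herd probability is 4/13 ≈ 0.31: herds are (almost surely) inevitable, yet a sizeable fraction of them are wrong.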
Second, we explore a new and economically compelling model with multiple types, and discover that a 'twin' observational pathology generically appears: confounded learning. It may well be impossible to draw any further inference from history even as it continues to accumulate privately informed decisions!
Third, we show how the martingale property of likelihood ratios is neatly linked with the stochastic stability of the learning dynamics. This not only allows us to analyze herding with noise and convergence to our new confounding outcome, but also shows promise for optimal experimentation.
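The martingale property itself can be verified by hand in the binary-signal special case (again a sketch, assuming informative actions reveal signals): conditional on state theta = 1, the public likelihood ratio ell = P(history | theta=0)/P(history | theta=1) satisfies E[ell' | ell] = ell, since ell is multiplied by q/p with probability p and by p/q with probability q. Exact rational arithmetic confirms the one-step computation:

```python
from fractions import Fraction

p = Fraction(3, 5)   # signal matches the true state with probability p (assumed value)
q = 1 - p            # mis-signal probability
ell = Fraction(1)    # public likelihood ratio P(history|theta=0)/P(history|theta=1)

# One informative action, conditional on theta = 1:
#   signal s=1 arrives w.p. p and multiplies ell by q/p,
#   signal s=0 arrives w.p. q and multiplies ell by p/q.
expected_next = p * (ell * q / p) + q * (ell * p / q)
print(expected_next == ell)  # prints True: ell is a martingale under theta=1
```

The same conditional-expectation identity is what ties the likelihood-ratio martingale to the stochastic stability of the long-run learning dynamics.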