Oseledets's Proof Reaction
2026-01-15
I just saw a proof of Oseledets’s theorem, the ideas of which were attributed to Kaimanovich and Karlsson-Margulis. It was super cool.
A rough statement of the theorem is that given an ergodic pmp system $f : X \to X$ on a probability space $(X,\mathcal{B},\mu)$ and a measurable $A: X \to \text{GL}_d\R$ satisfying an integrability condition, for almost every $x \in X$ there is a filtration
$$\R^d = F_1(x) \geq F_2(x) \geq \cdots \geq F_{k+1}(x) = \{0\}$$
and Lyapunov exponents $\chi_1(x) > \cdots > \chi_k(x)$, with $\chi_i(x)$ giving the exponential growth rate of vectors in $F_i(x) \setminus F_{i+1}(x)$.
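To pin down what the exponents mean (using the standard cocycle notation, and with what I believe is the intended integrability condition, $\log^+\|A^{\pm 1}\| \in L^1(\mu)$):
$$A^n(x) \coloneqq A(f^{n-1}x) \cdots A(fx)\, A(x), \qquad \chi_i(x) = \lim_{n\to\infty} \frac{1}{n} \log \|A^n(x) v\| \quad \text{for } v \in F_i(x) \setminus F_{i+1}(x).$$
By ergodicity, $k$ and the $\chi_i$ are almost everywhere constant.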
I thought the proof was very cool because it moves the setting of the problem to a symmetric space $Y \coloneqq \text{GL}_d\R / \text{O}(d)$ where, as I recall possibly incorrectly, the Lyapunov exponents wind up being the eigenvalues of a special element of the Lie algebra, namely the direction from $o$ of a geodesic around which the random walk of products of matrices $y_n \coloneqq [A(f^{n-1}x) \cdots A(x)]^{-1}o$ “clings”. This allowed for nice pictures to be drawn which made the proof really intelligible: nice landmarks to help place inequalities and tricks. The eigenspaces corresponding to the eigenvalues give the desired filtration.
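To unpack the symmetric space a little (this is the standard identification, not something from the lecture): $Y$ can be identified with the positive definite symmetric matrices,
$$\text{GL}_d\R / \text{O}(d) \;\cong\; \text{Pos}_d \coloneqq \{P \in \text{GL}_d\R : P = P^T,\ P > 0\}, \qquad [B] \mapsto BB^T,$$
with $\text{GL}_d\R$ acting by isometries via $B \cdot P \coloneqq BPB^T$, basepoint $o = \text{Id}$, and geodesics through $o$ of the form $t \mapsto \exp(tH)$ for symmetric $H$, which I take to be where the “special element of the Lie algebra” lives.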
The technical bits amount to a lemma which makes the “clinging” concrete in terms of some inequalities and well-chosen sequences. The $\text{CAT}(0)$ property is used a lot at this stage to bound how far the $y_n$ get from the clingy direction. Professor Wilkinson called this lemma the “Good Times Lemma”; I might be missing the reference. The proof also uses Kingman’s theorem, but it is unclear to me exactly why we need it. I think it is used to show that the random walk is contained in a linear cone around the clingy direction. I am not sure if it is used elsewhere, but I think that is accurate. If it is used elsewhere, I imagine it’s in the Good Times Lemma, which I’ll need to sit down and hash out on my own.
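Since I’ll want it later, here is Kingman’s theorem as I remember it: if $a_n : X \to \R$ are measurable, $a_1^+ \in L^1(\mu)$, and
$$a_{n+m}(x) \;\le\; a_n(x) + a_m(f^n x) \qquad \text{for all } m, n \ge 1 \text{ and a.e. } x,$$
then $\frac{1}{n} a_n$ converges almost everywhere to an $f$-invariant limit (a constant here, by ergodicity) equal to $\inf_n \frac{1}{n} \int a_n \, d\mu$, possibly $-\infty$.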
What do we get out of using a symmetric space?
- a left-invariant metric
- This allows us to use $y_n$ as defined above, with the inverse. I don’t know why this is necessary or helpful; I don’t see why we couldn’t just translate (literally, i.e. under the metric) our arguments to a $y_n'$ defined without the inverse.
- The distance between $o$ and $Bo$ for an element $[B] \in Y$ captures information about the singular values of $B$ (with multiplicity): $d_Y(o,Bo) = \|(\log \sigma_1(B), \log \sigma_2(B), \ldots, \log \sigma_d(B))\|_{2}$.
- Related to the last point: essentially, $\frac{1}{n}\log$ of the singular values of $A^n(x)$ approaches the Lyapunov exponents as $n \to \infty$ (see the sketch after this list).
- Wilkinson said that we use that the symmetric space is uniformly locally convex and something else which I didn’t write down and can’t remember for the life of me. I don’t know what uniformly locally convex means and I don’t know how we used it.
- We might have needed (at least used) the left-invariance of the metric to show that distances along the random walk from the origin are subadditive and thus to apply Kingman’s theorem; see the computation after this list. I suppose that’s one thing that working in a symmetric space gives you.
- We get the standard Lie theory that lets us apply what is essentially a log function, but that’s really only useful insofar as it lets us recover the vector of Lyapunov exponents.
- Do we even use negative curvature? (Or only non-positive curvature: is it zero somewhere because of the “G” in “GL”?)
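On the subadditivity point above, here is the computation I think is intended. Set $a_n(x) \coloneqq d_Y(o, y_n) = d_Y(o, [A^n(x)]^{-1} o)$. The cocycle identity $A^{n+m}(x) = A^m(f^n x)\, A^n(x)$ gives $[A^{n+m}(x)]^{-1} = [A^n(x)]^{-1} [A^m(f^n x)]^{-1}$, so the triangle inequality plus the fact that $\text{GL}_d\R$ acts on $Y$ by isometries yields
$$a_{n+m}(x) \;\le\; d_Y\big(o, [A^n(x)]^{-1} o\big) + d_Y\big([A^n(x)]^{-1} o,\ [A^n(x)]^{-1}[A^m(f^n x)]^{-1} o\big) = a_n(x) + a_m(f^n x),$$
which is exactly the hypothesis of Kingman’s theorem (integrability of $a_1$ should follow from the integrability condition on $A$ together with the distance formula above).
As for “taking the log of $A^n(x)$”, the statement I have in mind is
$$\chi_i = \lim_{n\to\infty} \frac{1}{n} \log \sigma_i\big(A^n(x)\big) \quad \text{for a.e. } x,$$
with singular values listed with multiplicity, so that $\frac{1}{n} d_Y(o, A^n(x) o)$ converges to the $\ell^2$ norm of the vector of Lyapunov exponents. A quick numerical sanity check of that limit (not from the lecture; the two matrices and the QR renormalization below are my own choices, and I’m using an i.i.d. “Bernoulli” cocycle rather than a general pmp system):
```python
import numpy as np

rng = np.random.default_rng(0)

# Two fixed invertible matrices; at each step apply one chosen uniformly at random
# (an i.i.d. cocycle over a Bernoulli shift).
mats = [np.array([[2.0, 1.0], [1.0, 1.0]]),
        np.array([[1.0, 0.5], [0.0, 1.0]])]

n = 10_000
Q = np.eye(2)
log_r = np.zeros(2)  # running sums of log |R_ii| from the QR renormalization
for _ in range(n):
    A = mats[rng.integers(len(mats))]
    # Renormalize the growing product A^n(x) = A(f^{n-1}x) ... A(x) with a QR step;
    # the time-averaged log |R_ii| approximate the Lyapunov exponents for large n.
    Q, R = np.linalg.qr(A @ Q)
    log_r += np.log(np.abs(np.diag(R)))

print("approximate Lyapunov exponents:", np.sort(log_r / n)[::-1])
```
The printed values should approximate $(\chi_1, \chi_2)$; since both matrices have determinant $1$, the two exponents should come out as negatives of each other.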
Another question: why is Oseledets’s theorem sometimes referred to as the “multiplicative ergodic theorem”?