Measure-theoretic probability formalizes probability using measure theory, enabling precise treatment of complex random phenomena. The Radon–Nikodym theorem provides a rigorous way to define the derivative of one measure with respect to another, crucial for expressing conditional probabilities and expectations. Conditional expectation, in this context, generalizes the classical expectation by integrating with respect to a sub-σ-algebra, allowing us to update predictions based on available information in a mathematically robust way.
What is measure-theoretic probability?
It models probability using a probability space (Ω, F, P): outcomes Ω, a σ‑algebra F of events, a probability measure P, and random variables as F‑measurable functions with expectation E[X] = ∫ X dP.
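As a minimal sketch, on a finite sample space the integral E[X] = ∫ X dP reduces to a weighted sum, which can be checked directly (the fair-die example below is an illustration, not from the text above):

```python
import numpy as np

# A finite probability space (Omega, F, P): Omega = {1,...,6} (a fair die),
# F = all subsets, P the uniform measure. A random variable X is simply
# a function on Omega; here X(w) = w^2.
omega = np.arange(1, 7)      # outcomes
p = np.full(6, 1 / 6)        # probability measure: P({w}) = 1/6

X = omega.astype(float) ** 2 # random variable X(w) = w^2

# Expectation as a Lebesgue integral: E[X] = sum of X(w) * P({w}) over Omega.
E_X = np.sum(X * p)
print(E_X)  # (1 + 4 + 9 + 16 + 25 + 36) / 6 = 91/6
```

On a finite space every function is measurable, so the σ-algebra machinery is invisible; it becomes essential only on uncountable spaces.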
What does the Radon–Nikodym theorem say in probability terms?
If ν and μ are σ-finite measures on (Ω, F) and ν is absolutely continuous with respect to μ (written ν ≪ μ, meaning μ(A) = 0 implies ν(A) = 0), then there exists a nonnegative measurable function f = dν/dμ such that ν(A) = ∫_A f dμ for all A in F. The function f, unique up to μ-null sets, is the Radon–Nikodym derivative (density) of ν with respect to μ.
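On a finite space the theorem becomes concrete: the Radon–Nikodym derivative is just the pointwise ratio of point masses, and the identity ν(A) = ∫_A f dμ can be verified by summation. A small sketch (the particular measures are illustrative assumptions):

```python
import numpy as np

# Finite space Omega = {0, 1, 2} with two measures. nu << mu holds because
# nu assigns zero mass wherever mu does (here mu is strictly positive).
mu = np.array([0.2, 0.3, 0.5])   # reference measure mu
nu = np.array([0.1, 0.6, 0.3])   # measure nu, absolutely continuous w.r.t. mu

# Radon-Nikodym derivative: f(w) = nu({w}) / mu({w})
f = nu / mu

# Verify nu(A) = integral over A of f dmu, e.g. for the event A = {0, 2}.
A = [0, 2]
nu_A = nu[A].sum()
int_A = (f * mu)[A].sum()
print(f, nu_A, int_A)  # f = [0.5, 2.0, 0.6]; the two numbers agree
```

This is exactly how a probability density works: a pdf is the Radon–Nikodym derivative of a distribution with respect to Lebesgue measure.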
How is the Radon–Nikodym derivative related to conditional probabilities or densities?
The RN derivative provides a density describing how one measure reweights another, which is what underlies conditional densities. In probability, the conditional probability P(A|G) is a G‑measurable random variable satisfying ∫_C P(A|G) dP = P(A ∩ C) for all C in G; its existence is guaranteed by the Radon–Nikodym theorem, applied to the measure C ↦ P(A ∩ C) on G, which is absolutely continuous with respect to P.
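When G is generated by a finite partition, P(A|G) is constant on each partition block and equals P(A ∩ B)/P(B) there, so the defining identity can be checked numerically. A sketch with an assumed die-roll example (the partition and event are illustrative):

```python
import numpy as np

# Fair die; G is generated by the partition {even, odd}; A = {4, 5, 6}.
omega = np.arange(1, 7)
p = np.full(6, 1 / 6)

blocks = [omega % 2 == 0, omega % 2 == 1]  # partition generating G
A = omega >= 4                             # the event A

# P(A|G) is constant on each block B, with value P(A ∩ B) / P(B).
cond = np.zeros(6)
for B in blocks:
    cond[B] = p[A & B].sum() / p[B].sum()

# Defining property: for every C in G (unions of blocks),
#   integral over C of P(A|G) dP  =  P(A ∩ C).
C = blocks[0]  # C = evens
lhs = (cond * p)[C].sum()
rhs = p[A & C].sum()
print(cond, lhs, rhs)  # on evens P(A|G) = 2/3, on odds 1/3; lhs == rhs
```

Here the RN structure is visible in miniature: P(A|G) is the density of the measure C ↦ P(A ∩ C) with respect to P, restricted to G.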
What is conditional expectation and why is it important?
E[X|G] is the G‑measurable random variable, unique up to P-null sets, defined by ∫_A E[X|G] dP = ∫_A X dP for all A ∈ G; intuitively, it is the best prediction of X given the information in G. It generalizes ordinary expectation (taking G trivial recovers E[X]) and is central to prediction, the study of dependence, and martingale theory.
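For a σ-algebra generated by a finite partition, E[X|G] is simply the block average of X, and both the defining identity and the tower property E[E[X|G]] = E[X] can be verified by summation. A sketch with an assumed low/high partition of a die roll:

```python
import numpy as np

# Fair die; X(w) = w; G is generated by the partition {low, high}.
omega = np.arange(1, 7)
p = np.full(6, 1 / 6)
X = omega.astype(float)

blocks = [omega <= 3, omega > 3]  # partition generating G

# On each block B, E[X|G] is the conditional average: (int_B X dP) / P(B).
cond_exp = np.zeros(6)
for B in blocks:
    cond_exp[B] = (X * p)[B].sum() / p[B].sum()

# Defining property: int_A E[X|G] dP = int_A X dP for every A in G.
for A in blocks:
    assert np.isclose((cond_exp * p)[A].sum(), (X * p)[A].sum())

# Tower property: E[E[X|G]] = E[X].
print(cond_exp)  # 2 on {1,2,3}, 5 on {4,5,6}
assert np.isclose((cond_exp * p).sum(), (X * p).sum())
```

Note that E[X|G] is itself a random variable (a function on Ω), not a number: it averages out exactly the randomness that G cannot distinguish.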