What is the Law of Large Numbers (LLN) in probability theory?
The LLN states that as the number of iid (independent and identically distributed) observations n increases, the sample average X̄_n converges to the population mean μ. In the weak form, X̄_n → μ in probability; in the strong form, X̄_n → μ almost surely.
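A minimal simulation sketch of the (weak) LLN, assuming Exponential(1) draws (so μ = 1) and an arbitrary seed: the absolute error of the sample mean shrinks as n grows.

```python
import numpy as np

# Sketch of the weak LLN: sample means of iid Exponential(1) draws
# (true mean mu = 1) get closer to mu as n grows.
rng = np.random.default_rng(0)
mu = 1.0

def sample_mean(n):
    """Mean of n iid Exponential(1) draws."""
    return rng.exponential(scale=1.0, size=n).mean()

# Absolute error |Xbar_n - mu| at increasing sample sizes.
errors = {n: abs(sample_mean(n) - mu) for n in (10**2, 10**4, 10**6)}
```

Any distribution with a finite mean would do here; Exponential(1) is just a convenient non-symmetric example.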
What does the Central Limit Theorem say about the distribution of the sample mean?
For iid X_i with mean μ and finite variance σ^2, the centered and scaled sample mean √n (X̄_n − μ) converges in distribution to N(0, σ^2). Equivalently, X̄_n is approximately N(μ, σ^2/n) for large n.
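A simulation sketch of this statement, assuming Uniform(0, 1) draws (μ = 0.5, σ² = 1/12) and arbitrary sample sizes: over many repetitions, √n (X̄_n − μ) has mean near 0 and variance near σ².

```python
import numpy as np

# Sketch of the CLT: for iid Uniform(0,1) draws (mu = 0.5, sigma^2 = 1/12),
# the statistic sqrt(n) * (Xbar_n - mu) should look like N(0, sigma^2).
rng = np.random.default_rng(1)
n, reps = 1_000, 20_000
mu, sigma2 = 0.5, 1.0 / 12

# Each row is one sample of size n; take its mean.
xbars = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbars - mu)   # approximately N(0, sigma^2)

emp_mean, emp_var = z.mean(), z.var()   # should be near 0 and 1/12
```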
How do LLN and CLT relate and why are they both important?
LLN guarantees the average approaches the true mean as n grows, while CLT describes the distribution around that mean for large n, enabling normal-based inference (confidence intervals, hypothesis tests). LLN ensures convergence; CLT justifies normal approximations.
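One standard piece of normal-based inference the CLT enables is the large-sample confidence interval. A sketch, assuming Exponential draws with true mean 2 and using the N(0,1) quantile 1.96 for a 95% interval:

```python
import numpy as np

# Sketch: a normal-approximation 95% confidence interval for mu,
# justified by the CLT (1.96 is the 97.5% quantile of N(0,1)).
rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=5_000)   # true mean mu = 2

xbar = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))         # estimated standard error of the mean
ci = (xbar - 1.96 * se, xbar + 1.96 * se)    # approx 95% CI for mu
```

The LLN makes X̄_n and the estimated standard error consistent; the CLT makes the ±1.96·se width meaningful.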
What are typical conditions under which the CLT holds?
For iid variables, finite mean μ and finite variance σ^2 suffice. More generally, for independent but non-identically distributed variables, moment conditions such as the Lindeberg or Lyapunov condition extend the theorem.
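A simulation sketch of the non-identical case, assuming independent Bernoulli(p_i) with varying p_i (bounded variables, so the Lyapunov condition holds): the sum, centered and scaled by s_n = √(Σ Var(X_i)), is still approximately N(0, 1).

```python
import numpy as np

# Sketch of the CLT for independent, non-identically distributed variables:
# Bernoulli(p_i) with varying p_i. Bounded variables satisfy the Lyapunov
# condition, so (S_n - E[S_n]) / s_n should be approximately N(0, 1).
rng = np.random.default_rng(3)
n, reps = 2_000, 20_000
p = rng.uniform(0.2, 0.8, size=n)            # fixed, varying success probabilities
s_n = np.sqrt(np.sum(p * (1 - p)))           # sqrt of the sum of variances

draws = rng.random((reps, n)) < p            # column i is Bernoulli(p_i)
z = (draws.sum(axis=1) - p.sum()) / s_n      # standardized sums

emp_mean, emp_var = z.mean(), z.var()        # should be near 0 and 1
```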